<?xml version="1.0"?>
<News hasArchived="false" page="1" pageCount="1" pageSize="10" timestamp="Mon, 20 Apr 2026 17:13:14 -0400" url="https://beta.my.umbc.edu/groups/csee/posts.xml?tag=computer-vision">
<NewsItem contentIssues="true" id="141170" important="false" status="posted" url="https://beta.my.umbc.edu/groups/csee/posts/141170">
<Title>Talk: visible-thermal images for medical applications, 4/24</Title>
<Tagline>4-5:15 pm ET, Wed., April 24, 2024 in ENGR 231 and online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><img src="https://ai.umbc.edu/wp-content/uploads/sites/734/2024/04/ordun.jpg" style="max-width: 100%; height: auto;"><div><br></div><div><div><strong>Visible-Thermal Image Registration and Translation for Remote Medical Applications</strong></div><div><br></div><div><strong><a href="https://www.linkedin.com/in/catherine-ordun/" rel="nofollow external" class="bo">Catherine Ordun</a>, Booz Allen Hamilton</strong></div><div><br></div><div><strong>4-5:15 pm ET, Wednesday, April 24, 2024</strong></div><div><strong>UMBC, ENGR 231 and <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">Webex</a></strong></div><div><br></div><div>Thermal imagery captured in the Long Wave Infrared (LWIR) spectrum has long played a vital role in thermal physiology. Signs of stress and inflammation that are unseen in the visible spectrum can be detected in LWIR due to the principles of blackbody radiation. As a result, thermal facial imagery provides a unique modality for the physiological assessment of states such as chronic pain. In this presentation, I will discuss my research on image registration to align visible-thermal images, a prerequisite for image-to-image translation using conditional <a href="https://en.wikipedia.org/wiki/Generative_adversarial_network" rel="nofollow external" class="bo">GANs</a> and <a href="https://en.wikipedia.org/wiki/Diffusion_model" rel="nofollow external" class="bo">Diffusion Models</a>. I will also share recent work leading research with the National Institutes of Health to apply this work in a real-world setting with cancer patients suffering from chronic pain.</div><div><br></div><div><a href="https://www.linkedin.com/in/catherine-ordun/" rel="nofollow external" class="bo">Dr. Catherine Ordun</a> is a Vice President at Booz Allen Hamilton, leading AI Rapid Prototyping and Tech Transfer solutions for mission-critical problems for the Federal Government. 
She drives AI rapid prototyping to support mission-critical proofs of concept across multiple AI domains, in addition to AI tech transfer to support algorithm reuse and consumption. She also leads multimodal AI research supporting the National Cancer Institute on chronic cancer pain detection. Dr. Ordun is a Ph.D. graduate of the UMBC Department of Information Systems, advised by Drs. Sanjay Purushotham and Edward Raff, and obtained her bachelor's degree from Georgia Tech, her master's from Emory, and an MBA from GWU Business School. She also holds an appointment at UMBC as an Adjunct Research Assistant Professor.</div></div></div>
]]>
</Body>
<Summary>Visible-Thermal Image Registration and Translation for Remote Medical Applications     Catherine Ordun, Booz Allen Hamilton     4-5:15 pm ET, Wednesday, April 24, 2024  UMBC, ENGR 231 and Webex...</Summary>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/141170/guest@my.umbc.edu/ee5fc9f58c1f347c5db043b17fa4e6ba/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>computer-vision</Tag>
<Tag>diffusion-model</Tag>
<Tag>gan</Tag>
<Tag>images</Tag>
<Group token="csee">Computer Science and Electrical Engineering</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/csee</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
<AvatarUrl size="original">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
<AvatarUrl size="large">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
<AvatarUrl size="medium">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
<AvatarUrl size="small">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
<Sponsor>Computer Science and Electrical Engineering</Sponsor>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Mon, 22 Apr 2024 09:43:36 -0400</PostedAt>
</NewsItem>

<NewsItem contentIssues="true" id="138703" important="false" status="posted" url="https://beta.my.umbc.edu/groups/csee/posts/138703">
<Title>Talk: Visual Concept Learning Beyond Appearances, 3:30pm 2/8</Title>
<Tagline>Modernizing some classic ideas</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><div><strong>PPR Distinguished Speaker</strong></div><div><br></div><div><strong>Visual Concept Learning Beyond Appearances: Modernizing a Couple of Classic Ideas</strong></div><div><strong><br></strong></div><div><strong><a href="https://yezhouyang.engineering.asu.edu/" rel="nofollow external" class="bo">Dr. Yezhou Yang</a></strong></div><div><strong>Arizona State University</strong></div><div><br></div><div><strong>3:30-4:45 pm ET, Thur. Feb. 8, 2024</strong></div><div><strong>ITE 325b &amp; via <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">WebEx</a></strong></div><div><br></div><div><br></div><div>The goal of <a href="https://en.wikipedia.org/wiki/Computer_vision" rel="nofollow external" class="bo">Computer Vision</a>, as framed by <a href="https://en.wikipedia.org/wiki/David_Marr_(neuroscientist)" rel="nofollow external" class="bo">Marr</a>, is to develop algorithms that answer "what," "where," and "when" from visual appearance. The speaker, among others, recognizes the importance of studying underlying entities and relations beyond visual appearance, following an <a href="https://en.wikipedia.org/wiki/Active_perception" rel="nofollow external" class="bo">Active Perception</a> paradigm. This talk will present the speaker's efforts over the last decade, ranging from 1) reasoning beyond appearance for vision and language tasks (<a href="https://huggingface.co/tasks/visual-question-answering" rel="nofollow external" class="bo">VQA</a>, <a href="https://huggingface.co/docs/transformers/main/en/tasks/image_captioning" rel="nofollow external" class="bo">captioning</a>, <a href="https://paperswithcode.com/task/text-to-image-generation" rel="nofollow external" class="bo">T2I</a>, etc.) and addressing their evaluation misalignment, through 2) reasoning about implicit properties, to 3) their roles in a robotic visual concept learning framework. 
The talk will also feature the Active Perception Group (APG)'s projects addressing emerging national challenges in the automated mobility and intelligent transportation domains, at the ASU School of Computing and Augmented Intelligence (SCAI).</div><div><br></div><div><a href="https://yezhouyang.engineering.asu.edu/" rel="nofollow external" class="bo">Yezhou (YZ) Yang</a> is an Associate Professor and a Fulton Entrepreneurial Professor in the School of Computing and Augmented Intelligence (SCAI) at Arizona State University. He founded and directs the ASU Active Perception Group, and currently serves as the topic lead (situation awareness) at the Institute of Automated Mobility, Arizona Commerce Authority. He is also a thrust lead (AVAI) at Advanced Communications Technologies (ACT, a Science and Technology Center under the New Economy Initiative, Arizona). His work includes exploring visual primitives and representation learning in visual (and language) understanding, grounding them in natural language, and high-level reasoning over the primitives for intelligent systems, secure/robust AI, and V&amp;L model evaluation alignment. Yang is a recipient of the Qualcomm Innovation Fellowship (2011), the NSF CAREER Award (2018), and the Amazon AWS Machine Learning Research Award (2019). He received his Ph.D. from the University of Maryland, College Park, and his B.E. from Zhejiang University, China. He is a co-founder of ARGOS Vision Inc, an ASU spin-off company.</div><div><br></div><div>The Advances in Perception, Prediction, and Reasoning (PPR) talks are organized and hosted by UMBC Professor <a href="https://www.tejasgokhale.com/" rel="nofollow external" class="bo">Tejas Gokhale</a>.</div></div>
]]>
</Body>
<Summary>PPR Distinguished Speaker     Visual Concept Learning Beyond Appearances: Modernizing a Couple of Classic Ideas     Dr. Yezhou Yang  Arizona State University     3:30-4:45 pm ET, Thur. Feb. 8,...</Summary>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/138703/guest@my.umbc.edu/0f2c1af450698556df0f884e5a8e1a26/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>computer-vision</Tag>
<Tag>talk</Tag>
<Group token="csee">Computer Science and Electrical Engineering</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/csee</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
<AvatarUrl size="original">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
<AvatarUrl size="large">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
<AvatarUrl size="medium">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
<AvatarUrl size="small">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
<Sponsor>Computer Science and Electrical Engineering</Sponsor>
<ThumbnailUrl size="xxlarge">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/138/703/3ab36af96346b9962d826971827146f7/xxlarge.jpg?1707226179</ThumbnailUrl>
<ThumbnailUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/138/703/3ab36af96346b9962d826971827146f7/xlarge.jpg?1707226179</ThumbnailUrl>
<ThumbnailUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/138/703/3ab36af96346b9962d826971827146f7/large.jpg?1707226179</ThumbnailUrl>
<ThumbnailUrl size="medium">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/138/703/3ab36af96346b9962d826971827146f7/medium.jpg?1707226179</ThumbnailUrl>
<ThumbnailUrl size="small">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/138/703/3ab36af96346b9962d826971827146f7/small.jpg?1707226179</ThumbnailUrl>
<ThumbnailUrl size="xsmall">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/138/703/3ab36af96346b9962d826971827146f7/xsmall.jpg?1707226179</ThumbnailUrl>
<ThumbnailUrl size="xxsmall">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/138/703/3ab36af96346b9962d826971827146f7/xxsmall.jpg?1707226179</ThumbnailUrl>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Tue, 06 Feb 2024 08:44:10 -0500</PostedAt>
</NewsItem>

<NewsItem contentIssues="true" id="137288" important="false" status="posted" url="https://beta.my.umbc.edu/groups/csee/posts/137288">
<Title>Talk: Learning Actions from Humans in Video, 4pm Mon. Nov 27</Title>
<Tagline>Modeling &amp; understanding actions is key for computer vision</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><img src="https://www.csee.umbc.edu/wp-content/uploads/sites/659/2023/11/Picture1.png" style="max-width: 100%; height: auto;"><div><span><hr><strong><br></strong></span></div><div><span><strong>Advances in Perception, Prediction, and Reasoning</strong></span></div><div><div><br></div><h4>Learning Actions from Humans in Video</h4><div><br></div><h5>4:00-5:15pm ET, Monday, Nov 27, 2023<br>UMBC, Engineering 231 and via <a href="https://umbc.webex.com/meet/gokhale" rel="nofollow external" class="bo">WebEx</a></h5><div><br></div><h5><a href="https://www.linkedin.com/in/eadom-dessalene-41b08b1b4/" rel="nofollow external" class="bo">Eadom Dessalene</a> <br>University of Maryland, College Park</h5><div><br></div><div>The prevalent computer vision paradigm in the realm of action understanding is to directly transfer advances in object recognition toward action understanding. In this presentation, I discuss the motivations for an alternative embodied approach centered around the modeling of actions rather than objects, survey our recent work along these lines, and outline promising future directions.</div><div><br></div><div><a href="https://www.linkedin.com/in/eadom-dessalene-41b08b1b4/" rel="nofollow external" class="bo"><strong>Eadom Dessalene</strong></a> is a Ph.D. Candidate at the University of Maryland, College Park, advised by Yiannis Aloimonos and Cornelia Fermuller in the Perception and Robotics Group. Eadom received his bachelor's degree in Computer Science from George Mason University. 
He has made several important contributions to research on video understanding, egocentric vision, and action understanding through publications in CVPR, ICLR, T-PAMI, and ICRA, as well as winning first place in the <a href="https://www.cs.umd.edu/article/2020/07/cs-team-wins-epic-kitchen-action-anticipation-challenge" rel="nofollow external" class="bo">2020 EPIC Kitchens Action Anticipation Challenge</a>.</div><div><br></div><div>The <a href="https://www.tejasgokhale.com/seminar.html" rel="nofollow external" class="bo">Advances in Perception, Prediction, and Reasoning</a> (PPR) talks are organized and hosted by UMBC Professor <a href="https://www.tejasgokhale.com/" rel="nofollow external" class="bo">Tejas Gokhale</a>.</div></div><div><br></div></div>
]]>
</Body>
<Summary>Advances in Perception, Prediction, and Reasoning      Learning Actions from Humans in Video     4:00-5:15pm ET, Monday, Nov 27, 2023 UMBC, Engineering 231 and via WebEx     Eadom Dessalene ...</Summary>
<Website>https://www.tejasgokhale.com/seminar.html</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/137288/guest@my.umbc.edu/bff3b3da5a23f897e487d43b3f3dc333/api/pixel</TrackingUrl>
<Tag>actions</Tag>
<Tag>ai</Tag>
<Tag>computer-vision</Tag>
<Tag>talk</Tag>
<Group token="csee">Computer Science and Electrical Engineering</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/csee</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
<AvatarUrl size="original">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/original.png?1314043393</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxlarge.png?1314043393</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xlarge.png?1314043393</AvatarUrl>
<AvatarUrl size="large">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/large.png?1314043393</AvatarUrl>
<AvatarUrl size="medium">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/medium.png?1314043393</AvatarUrl>
<AvatarUrl size="small">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/small.png?1314043393</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xsmall.png?1314043393</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/099/d117dca133c64bf78a4b7696dd007189/xxsmall.png?1314043393</AvatarUrl>
<Sponsor>Computer Science and Electrical Engineering</Sponsor>
<PawCount>1</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Sun, 26 Nov 2023 19:29:10 -0500</PostedAt>
</NewsItem>

</News>
