<?xml version="1.0"?>
<News hasArchived="false" page="1" pageCount="2" pageSize="10" timestamp="Thu, 30 Apr 2026 10:26:46 -0400" url="https://beta.my.umbc.edu/groups/umbc-ai/posts.xml?tag=nlp">
<NewsItem contentIssues="false" id="154576" important="false" status="posted" url="https://beta.my.umbc.edu/groups/umbc-ai/posts/154576">
<Title>Talk: Wikipedia from the World: Grounded Articles from Any Source, 11/24</Title>
<Tagline>4-5:15 pm EST Monday, Nov. 24, 2025 in ITE 229 &amp; Online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><h5>Wikipedia from the World: Grounded Articles from Any Source</h5><h5>Alexander Martin, JHU</h5><div><strong>4-5:15pm EST Monday, Nov. 24 in ITE 229 and</strong> <a href="https://meet.google.com/dgs-edxk-cfq" rel="nofollow external" class="bo"><strong>online</strong></a></div><div><br></div><div>Whether tracking emerging events, analyzing economic trends, or understanding public discourse, valuable information is scattered across modalities, from professionally produced news content and curated Wikipedia articles to firsthand footage of disasters livestreamed on social media. Building systems that can effectively retrieve, reason over, and synthesize these heterogeneous information sources is essential for knowledge-intensive applications.</div><div><br></div><div>This talk will focus on advancing both sides of the information-seeking pipeline: retrieving relevant multimodal evidence at scale, and synthesizing that evidence into coherent, Wikipedia-style explanations grounded in verifiable evidence. For retrieval, we will focus on recent progress in large-scale <a href="https://www.amazon.science/blog/using-generative-ai-to-do-multimodal-information-retrieval" rel="nofollow external" class="bo">multimodal retrieval</a>, including new datasets, efficient and scalable first-stage retrievers, and reasoning-based reranking. For Wikipedia-style article generation, we will cover benchmarking and evaluation of multimodal article generation, as well as a method for enabling the use of <a href="https://www.nvidia.com/en-us/glossary/vision-language-models/" rel="nofollow external" class="bo">VLMs</a> for high-level reasoning.
Together, these components outline a path toward unified systems capable of transforming large collections of multimodal evidence into verifiable, human-readable articles.</div><div><br></div><div><a href="https://alexmartin1722.github.io/" rel="nofollow external" class="bo"><strong>Alexander Martin</strong></a> is a PhD candidate at Johns Hopkins University’s Center for Language and Speech Processing (<a href="https://www.clsp.jhu.edu/" rel="nofollow external" class="bo">CLSP</a>) and Human Language Technology Center of Excellence (<a href="https://hltcoe.jhu.edu/" rel="nofollow external" class="bo">HLTCOE</a>). He is advised by Dr. Benjamin Van Durme. Alex’s research focuses on end-to-end multimodal information retrieval and reasoning. His work aims to produce Wikipedia-style articles, grounded in retrieved documents and videos, in response to information-seeking queries. His research has been published in CVPR, ACL, NAACL, and EMNLP. Alex is a recipient of the NSF’s Graduate Research Fellowship.</div><div><br></div><div>Hosted by Prof. <a href="https://www.tejasgokhale.com/" rel="nofollow external" class="bo">Tejas Gokhale</a> at UMBC ITE 229 and <a href="https://meet.google.com/dgs-edxk-cfq" rel="nofollow external" class="bo">online</a>.</div></div>
]]>
</Body>
<Summary>Wikipedia from the World: Grounded Articles from Any Source  Alexander Martin, JHU  4-5:15pm EST Monday, Nov. 24 in ITE229 and online     Whether tracking emerging events, analyzing economic...</Summary>
<Website>https://www.tejasgokhale.com/seminar.html</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/154576/guest@my.umbc.edu/44a9d69937b5e480a80dbfeebb8f361b/api/pixel</TrackingUrl>
<Tag>genai</Tag>
<Tag>information-retrieval</Tag>
<Tag>llm</Tag>
<Tag>multimodal</Tag>
<Tag>nlp</Tag>
<Tag>talk</Tag>
<Tag>vlm</Tag>
<Tag>wikipedia</Tag>
<Group token="umbc-ai">UMBC AI</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/umbc-ai</GroupUrl>
<AvatarUrl>https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="original">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
<AvatarUrl size="xsmall">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
<Sponsor>UMBC</Sponsor>
<ThumbnailUrl size="xxlarge">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/154/576/737bb9b992cfe9a9f955d8e5875b7ebd/xxlarge.jpg?1763330590</ThumbnailUrl>
<ThumbnailUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/154/576/737bb9b992cfe9a9f955d8e5875b7ebd/xlarge.jpg?1763330590</ThumbnailUrl>
<ThumbnailUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/154/576/737bb9b992cfe9a9f955d8e5875b7ebd/large.jpg?1763330590</ThumbnailUrl>
<ThumbnailUrl size="medium">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/154/576/737bb9b992cfe9a9f955d8e5875b7ebd/medium.jpg?1763330590</ThumbnailUrl>
<ThumbnailUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/154/576/737bb9b992cfe9a9f955d8e5875b7ebd/small.jpg?1763330590</ThumbnailUrl>
<ThumbnailUrl size="xsmall">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/154/576/737bb9b992cfe9a9f955d8e5875b7ebd/xsmall.jpg?1763330590</ThumbnailUrl>
<ThumbnailUrl size="xxsmall">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/154/576/737bb9b992cfe9a9f955d8e5875b7ebd/xxsmall.jpg?1763330590</ThumbnailUrl>
<ThumbnailAltText>multimodal information retrieval</ThumbnailAltText>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Mon, 17 Nov 2025 08:14:06 -0500</PostedAt>
</NewsItem>

<NewsItem contentIssues="false" id="153614" important="false" status="posted" url="https://beta.my.umbc.edu/groups/umbc-ai/posts/153614">
<Title>Talk: Inductive Analysis of Texts with Embeddings, 11/5</Title>
<Tagline>12-1:30pm Wednesday, November 5, 2025, Commons 329</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><h4>Inductive Analysis of Texts with Embeddings</h4><h4>Prof. <a href="https://my3.my.umbc.edu/groups/csss/events/145563/a6c/807b1717112ab90ff4d8fdacb5d94cce/web/link?link=https%3A%2F%2Fwww.dustinstoltz.com%2F" rel="nofollow external" class="bo">Dustin Stoltz</a>, Lehigh University</h4><p>Word or text <a href="https://en.wikipedia.org/wiki/Word_embedding" rel="nofollow external" class="bo"><strong>embeddings</strong></a> are a central component in modern language models, including those powering generative AI. Embeddings represent word meanings as positions in space, where words that are closer together are used in similar contexts or evoke similar concepts -- even if those words never actually co-occur. We can navigate the meaning space created by embeddings directly using basic arithmetic, and in doing so explore how meaning changes over time or differs across collections of texts. </p><p><a href="https://my3.my.umbc.edu/groups/csss/events/145563/a6c/807b1717112ab90ff4d8fdacb5d94cce/web/link?link=https%3A%2F%2Fwww.dustinstoltz.com%2F" rel="nofollow external" class="bo"><strong>Dustin Stoltz</strong></a> is an Assistant Professor of Sociology and Cognitive Science at Lehigh University. He studies a variety of topics in cultural and economic sociology and specializes in computational methods. Five copies of his recently published book, <a href="https://global.oup.com/academic/product/mapping-texts-9780197756881?cc=us&amp;lang=en&amp;" rel="nofollow external" class="bo">Mapping Texts: Computational Text Analysis for the Social Sciences</a> (coauthored with Marshall Taylor), will be raffled off to workshop registrants.
</p><h4><a href="https://my3.my.umbc.edu/groups/csss/events/145563" rel="nofollow external" class="bo">Register here.</a></h4><p>Lunch will be provided for registered attendees.</p><p>Hosted by the Center for Social Science Scholarship and cosponsored by the Departments of English; Sociology, Anthropology, &amp; Public Health; Modern Languages, Linguistics, &amp; Intercultural Communication; the Division of Information Technology; the Center for Scalable Data and Computational Science; and CGC-SCIPE.</p></div>
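The "basic arithmetic" on embeddings mentioned above can be sketched in a few lines. The 4-dimensional vectors and tiny vocabulary below are invented purely for illustration (real embeddings have hundreds of dimensions and are learned from large corpora):

```python
import math

# Hypothetical toy embeddings -- hand-picked so the analogy works out.
emb = {
    "king":   [0.9, 0.8, 0.1, 0.2],
    "queen":  [0.9, 0.1, 0.8, 0.2],
    "man":    [0.1, 0.9, 0.1, 0.1],
    "woman":  [0.1, 0.1, 0.9, 0.1],
    "banana": [0.0, 0.0, 0.0, 0.9],
}

def cosine(a, b):
    """Cosine similarity: how close two directions in meaning space are."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# The classic analogy: king - man + woman should land near queen.
target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
nearest = max((w for w in emb if w not in {"king", "man", "woman"}),
              key=lambda w: cosine(target, emb[w]))
print(nearest)  # queen
```

The same subtract-and-add trick, applied to embeddings trained on different corpora or time periods, is what lets analysts compare how a word's meaning shifts.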
]]>
</Body>
<Summary>Inductive Analysis of Texts with Embeddings  Prof. Dustin Stoltz, Lehigh University  Word or text embeddings are a central component in modern language models, including those powering generative...</Summary>
<Website>https://my3.my.umbc.edu/groups/csss/events/145563</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/153614/guest@my.umbc.edu/26fc0b112f8765b28d76511cf407fd11/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>analysis</Tag>
<Tag>embeddings</Tag>
<Tag>nlp</Tag>
<Tag>text</Tag>
<Group token="umbc-ai">UMBC AI</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/umbc-ai</GroupUrl>
<AvatarUrl>https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="original">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
<AvatarUrl size="xsmall">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
<Sponsor>UMBC AI</Sponsor>
<ThumbnailUrl size="xxlarge">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/614/9fbba467d186f849f1710ebc9e61c82f/xxlarge.jpg?1760577891</ThumbnailUrl>
<ThumbnailUrl size="xlarge">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/614/9fbba467d186f849f1710ebc9e61c82f/xlarge.jpg?1760577891</ThumbnailUrl>
<ThumbnailUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/614/9fbba467d186f849f1710ebc9e61c82f/large.jpg?1760577891</ThumbnailUrl>
<ThumbnailUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/614/9fbba467d186f849f1710ebc9e61c82f/medium.jpg?1760577891</ThumbnailUrl>
<ThumbnailUrl size="small">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/614/9fbba467d186f849f1710ebc9e61c82f/small.jpg?1760577891</ThumbnailUrl>
<ThumbnailUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/614/9fbba467d186f849f1710ebc9e61c82f/xsmall.jpg?1760577891</ThumbnailUrl>
<ThumbnailUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/614/9fbba467d186f849f1710ebc9e61c82f/xxsmall.jpg?1760577891</ThumbnailUrl>
<ThumbnailAltText>Image of Dustin Stoltz, an Assistant Professor of Sociology and Cognitive Science at Lehigh University.</ThumbnailAltText>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Thu, 16 Oct 2025 12:31:41 -0400</PostedAt>
<EditAt>Sat, 18 Oct 2025 10:03:00 -0400</EditAt>
</NewsItem>

<NewsItem contentIssues="false" id="153403" important="false" status="posted" url="https://beta.my.umbc.edu/groups/umbc-ai/posts/153403">
<Title>Talk: Towards Multilingual Evaluations of Knowledge for LLMs</Title>
<Tagline>2-3pm EDT Tue., Oct. 14, 2025, ITE 325b, UMBC</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><h5>Language Technology Seminar Series (LaTeSS)</h5><h4>Towards Multilingual Evaluations of Knowledge for Large Language Models</h4><h5>Bryan Li, University of Pennsylvania<br>2-3pm Tue., Oct. 14, 2025, ITE 325b, UMBC</h5><div>Contemporary language models (LMs) support dozens of languages, promising to broaden information access for global users. However, existing multilingual evaluations largely study factual recall tasks, failing to address knowledge-intensive tasks shaped by the uneven coverage and different perspectives of knowledge across languages. This dissertation investigates how LMs handle such tasks by examining their internal parametric knowledge and their use of externally-provided contextual knowledge. In the first part, I introduce benchmarks for complex reasoning and territorial disputes, and find that LM responses on both tasks exhibit a lack of cross-lingual robustness, outputting inconsistent answers to the same underlying queries written in different languages. I then show that lightweight methods of leveraging program code and persona-based prompting can mitigate these issues.</div><div><br></div><div>In the second part, I explore the retrieval-augmented generation (RAG) setting, which combines an LM's internal parametric knowledge with contextual knowledge from external knowledge bases (KBs). Focusing on the territorial disputes task, I show that while RAG over single-language or single-source KBs has mixed effects on robustness, retrieving over multilingual and multi-source KBs — Wikipedia, as well as a large-scale dataset of state media articles I collected — substantially boosts robustness. Together, these findings highlight the need for LMs that can navigate, and assist users in navigating, the real-world distribution of knowledge across languages and sources. 
This is a practice dissertation talk, and your feedback would be greatly appreciated!</div><div><br></div><div><a href="https://manestay.github.io/" rel="nofollow external" class="bo"><strong>Bryan Li </strong></a>is a final-year PhD student at the University of Pennsylvania, advised by Prof. Chris Callison-Burch. His research focuses on multilingual evaluations of LLMs, spanning both the fields of natural language processing and computational social science. His work has appeared in conferences such as ACL, COLM, and ICLR. Outside of research, you can find him in a trendy cafe, a river-side running trail, or at home listening to a good podcast.</div></div>
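The RAG setting described in the abstract can be illustrated with a minimal retrieve-then-prompt loop. The token-overlap scorer, document strings, and function names below are invented for illustration; real systems use dense retrievers and an actual LM rather than a string template:

```python
# Minimal RAG sketch: score documents against a query by word overlap,
# keep the top-k, and splice them into the prompt as grounding context.

def retrieve(query, docs, k=2):
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Wikipedia articles describe the territory in several languages.",
    "State media coverage of the dispute differs by country.",
    "Unrelated note about a campus seminar schedule.",
]
prompt = build_prompt("which sources cover the territorial dispute", docs)
```

Retrieving over multiple sources at once (here, a mixed document list) is the multi-source setup the abstract argues improves robustness.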
]]>
</Body>
<Summary>Language Technology Seminar Series (LaTeSS)  Towards Multilingual Evaluations of Knowledge for Large Language Models  Bryan Li, University of Pennsylvania 2-3pm Tue., Oct. 14, 2025, ITE 325b, UMBC...</Summary>
<Website>https://laramartin.net/LaTeSS</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/153403/guest@my.umbc.edu/62abef58462a166d634b00ce2adf44ee/api/pixel</TrackingUrl>
<Tag>language-model</Tag>
<Tag>llm</Tag>
<Tag>multilingual</Tag>
<Tag>nlp</Tag>
<Tag>rag</Tag>
<Group token="umbc-ai">UMBC AI</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/umbc-ai</GroupUrl>
<AvatarUrl>https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="original">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
<AvatarUrl size="xsmall">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
<Sponsor>UMBC Language, Aid, and Representation AI Lab</Sponsor>
<ThumbnailUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/403/e97b7ff3606ac8b1622433fd30815d10/xxlarge.jpg?1759949298</ThumbnailUrl>
<ThumbnailUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/403/e97b7ff3606ac8b1622433fd30815d10/xlarge.jpg?1759949298</ThumbnailUrl>
<ThumbnailUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/403/e97b7ff3606ac8b1622433fd30815d10/large.jpg?1759949298</ThumbnailUrl>
<ThumbnailUrl size="medium">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/403/e97b7ff3606ac8b1622433fd30815d10/medium.jpg?1759949298</ThumbnailUrl>
<ThumbnailUrl size="small">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/403/e97b7ff3606ac8b1622433fd30815d10/small.jpg?1759949298</ThumbnailUrl>
<ThumbnailUrl size="xsmall">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/403/e97b7ff3606ac8b1622433fd30815d10/xsmall.jpg?1759949298</ThumbnailUrl>
<ThumbnailUrl size="xxsmall">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/153/403/e97b7ff3606ac8b1622433fd30815d10/xxsmall.jpg?1759949298</ThumbnailUrl>
<ThumbnailAltText>Bryan Li observing a crash between a vehicle and a balloon</ThumbnailAltText>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Wed, 08 Oct 2025 15:07:02 -0400</PostedAt>
</NewsItem>

<NewsItem contentIssues="false" id="151217" important="false" status="posted" url="https://beta.my.umbc.edu/groups/umbc-ai/posts/151217">
<Title>UMBC PhD student Ommo Clark wins best paper award with Karuna Joshi</Title>
<Tagline>Detecting misinformation with LLMs and Knowledge Graphs</Tagline>
<Body>
<![CDATA[
    <div class="html-content">A research paper by UMBC Information Systems PhD student <a href="https://knacc.umbc.edu/people/students/ommo-clark/" rel="nofollow external" class="bo"><strong>Ommo Clark</strong></a> co-authored with her advisor Professor <a href="https://knacc.umbc.edu/karuna-pande-joshi/" rel="nofollow external" class="bo"><strong>Karuna Joshi</strong></a> received the Best Student Paper award at the <a href="https://services.conferences.computer.org/2025/icdh/" rel="nofollow external" class="bo"><strong>IEEE International Conference on Digital Health</strong></a> held earlier this month in Helsinki as part of the IEEE Services Congress.<div><br></div><div>The paper addressed the problem of identifying <span>health misinformation on social media platforms, which </span><span>poses a threat to public health by contributing to vaccine hesitancy, delayed medical interventions, and the adoption of untested or harmful treatments. </span></div><div><br></div><div>Clark and Joshi evaluated their hybrid approach combining LLM and knowledge-graph technologies on a dataset of Reddit posts discussing chronic health conditions, and showed its advantages over models that use only text or knowledge graphs. Their paper, <strong>Real-Time Detection of Online Health Misinformation using an Integrated KnowledgeGraph-LLM Approach</strong>, is available <a href="https://ebiquity.umbc.edu/paper/html/id/1193/Real-Time-Detection-of-Online-Health-Misinformation-using-an-Integrated-Knowledgegraph-LLM-Approach" rel="nofollow external" class="bo"><strong>here</strong></a>.</div><div><br></div><div><br></div></div>
]]>
</Body>
<Summary>A research paper by UMBC Information Systems PhD student Ommo Clark co-authored with her advisor Professor Karuna Joshi received the Best Student Paper award at the IEEE International Conference...</Summary>
<Website>https://ebiquity.umbc.edu/paper/html/id/1193/Real-Time-Detection-of-Online-Health-Misinformation-using-an-Integrated-Knowledgegraph-LLM-Approach</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/151217/guest@my.umbc.edu/87be4179805dd18174504aefdd4b88c1/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>knowledge-graph</Tag>
<Tag>misinformation</Tag>
<Tag>nlp</Tag>
<Group token="umbc-ai">UMBC AI</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/umbc-ai</GroupUrl>
<AvatarUrl>https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="original">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
<AvatarUrl size="xsmall">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
<Sponsor>UMBC AI</Sponsor>
<ThumbnailUrl size="xxlarge">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/151/217/56a5e670afb49155d0a9e35a1f91f67e/xxlarge.jpg?1753912567</ThumbnailUrl>
<ThumbnailUrl size="xlarge">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/151/217/56a5e670afb49155d0a9e35a1f91f67e/xlarge.jpg?1753912567</ThumbnailUrl>
<ThumbnailUrl size="large">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/151/217/56a5e670afb49155d0a9e35a1f91f67e/large.jpg?1753912567</ThumbnailUrl>
<ThumbnailUrl size="medium">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/151/217/56a5e670afb49155d0a9e35a1f91f67e/medium.jpg?1753912567</ThumbnailUrl>
<ThumbnailUrl size="small">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/151/217/56a5e670afb49155d0a9e35a1f91f67e/small.jpg?1753912567</ThumbnailUrl>
<ThumbnailUrl size="xsmall">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/151/217/56a5e670afb49155d0a9e35a1f91f67e/xsmall.jpg?1753912567</ThumbnailUrl>
<ThumbnailUrl size="xxsmall">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/151/217/56a5e670afb49155d0a9e35a1f91f67e/xxsmall.jpg?1753912567</ThumbnailUrl>
<ThumbnailAltText>Paper by Ommo Clark and Karuna Joshi receives award</ThumbnailAltText>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Wed, 30 Jul 2025 18:20:20 -0400</PostedAt>
</NewsItem>

<NewsItem contentIssues="true" id="148186" important="false" status="posted" url="https://beta.my.umbc.edu/groups/umbc-ai/posts/148186">
<Title>Tutorial on NeuroSymbolic AI applied to NLP</Title>
<Tagline>Material from the AAAI 2025 tutorial</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><img src="https://ai.umbc.edu/wp-content/uploads/sites/734/2025/03/tutorial.png" style="max-width: 100%; height: auto;"><div><br></div><div><div><a href="https://en.wikipedia.org/wiki/Large_language_model" rel="nofollow external" class="bo"><strong>Large Language Models</strong></a> are transforming natural language processing tasks across many domains. Despite their capabilities, their real-world adoption is often limited by issues like the lack of transparency, inadequate understanding of domain protocols, and subpar precision. </div><div><br></div><div><a href="https://manasgaur.github.io/" rel="nofollow external" class="bo"><strong>Manas Gaur</strong></a>, <a href="https://www.edwardraff.com/" rel="nofollow external" class="bo"><strong>Ed Raff</strong></a>, and <a href="https://mohammadi-ali.github.io/" rel="nofollow external" class="bo"><strong>Ali Mohammadi</strong></a> were part of the team that organized and presented a half-day tutorial at the 2025 AAAI Conference last month covering the concept of <a href="https://en.wikipedia.org/wiki/Neuro-symbolic_AI" rel="nofollow external" class="bo"><strong>Neurosymbolic AI</strong></a> and how it can be applied to LLMs to help solve key challenges in NLP tasks like explainability, grounding, and instructability.</div><div><br></div><div>You can see their slides and other material <a href="https://nesy-egi.github.io/" rel="nofollow external" class="bo"><strong>here</strong></a>.</div></div><div><br></div>
    
    <hr><a href="https://ai.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Center for AI</strong></a></div>
]]>
</Body>
<Summary>Large Language Models are transforming natural language processing tasks in multiple domains in many ways. Despite their capabilities, their real-world adoption is often limited by issues like the...</Summary>
<Website>https://nesy-egi.github.io/</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/148186/guest@my.umbc.edu/2c3ececc48c719a462b3d72dabdc0413/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>llm</Tag>
<Tag>nlp</Tag>
<Tag>tutorial</Tag>
<Group token="umbc-ai">UMBC AI</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/umbc-ai</GroupUrl>
<AvatarUrl>https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="original">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
<AvatarUrl size="xsmall">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
<Sponsor>UMBC AI</Sponsor>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Fri, 21 Mar 2025 10:18:25 -0400</PostedAt>
<EditAt>Fri, 21 Mar 2025 10:41:51 -0400</EditAt>
</NewsItem>

<NewsItem contentIssues="false" id="145624" important="false" status="posted" url="https://beta.my.umbc.edu/groups/umbc-ai/posts/145624">
<Title>Talk: Using Participatory Design and AI to Create Agency-increasing Augmentative and Alternative Communication Systems</Title>
<Tagline>3-4pm ET Mon., Nov. 18, 2024 in ITE 406 &amp; online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><h4><strong>Talk: Using Participatory Design and AI to Create Agency-increasing Augmentative and Alternative Communication Systems</strong></h4><div><br></div><h4><strong><a href="https://stephanie-valencia.com/" rel="nofollow external" class="bo">Dr. Stephanie Valencia</a>, Univ. of Maryland<br></strong><strong>3-4pm ET Monday, November 18, 2024<br></strong><strong>ITE 406, UMBC and <a href="https://umbc.webex.com/wbxmjs/joinservice/sites/umbc/meeting/download/6ffaf986d45fc74b41db09df7c78a7d9" rel="nofollow external" class="bo">online</a></strong></h4><div><br></div><div>Agency and communication are integral to personal development, enabling us to pursue and express our goals. However, agency in communication is not fixed. Many individuals who use speech-generating devices to communicate encounter social constraints and technical limitations that can restrict what they can say, how they can say it, and when they can contribute to a discussion. In this talk, I will delve into how an agency-centered design approach can foster more accessible communication experiences and help us uncover opportunities for design. Drawing from empirical research and collaborative co-design with people with disabilities, I will highlight how various technological tools, such as automated transcription, physical interaction artifacts, and AI-driven language generation, can impact conversational agency. Additionally, I will share practical design strategies and discuss existing challenges for co-designing communication technologies that enhance user agency and participation.</div><div><br></div><div><a href="https://stephanie-valencia.com/" rel="nofollow external" class="bo"><strong>Dr. Valencia</strong></a> is dedicated to promoting equitable access to assistive technologies (AT), advocating for open-source hardware, and championing the inclusion of underrepresented groups in technology design and development. Dr. Valencia's research endeavors are centered on elevating user agency, accessibility, and enjoyment. Employing participatory design methodologies, she has explored the integration of diverse design elements such as artificial intelligence and embodied expressive objects to empower augmentative and alternative communication users. Dr. Valencia works not only on conceptualizing these innovations but also on building and deploying them to make a real-world impact. Rigorous empirical studies are an integral part of her work, ensuring that the efficacy and significance of design contributions are thoroughly assessed. She earned her Ph.D. at the Human-Computer Interaction Institute at Carnegie Mellon University.</div><div><br></div>
    <hr><a href="https://ai.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Center for AI</strong></a></div>
]]>
</Body>
<Summary>Talk: Using Participatory Design and AI to Create Agency-increasing Augmentative and Alternative Communication Systems     Dr. Stephanie Valencia, Univ. of Maryland 3-4pm ET Monday, November 18,...</Summary>
<Website>https://my3.my.umbc.edu/groups/langtech/events/136186</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/145624/guest@my.umbc.edu/da246d6551d44cb689cad7374ec3aaf5/api/pixel</TrackingUrl>
<Tag>agency</Tag>
<Tag>ai</Tag>
<Tag>nlp</Tag>
<Group token="umbc-ai">UMBC AI</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/umbc-ai</GroupUrl>
<AvatarUrl>https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="original">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
<AvatarUrl size="xsmall">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
<Sponsor>LARA Lab and Interactive Systems Research Center</Sponsor>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Wed, 13 Nov 2024 21:48:26 -0500</PostedAt>
<EditAt>Wed, 13 Nov 2024 21:55:47 -0500</EditAt>
</NewsItem>

<NewsItem contentIssues="true" id="145547" important="false" status="posted" url="https://beta.my.umbc.edu/groups/umbc-ai/posts/145547">
<Title>Talk: Confabulation: What Could LLM Hallucinations Do For Storytelling? 11/14</Title>
<Tagline>11:30-12:50 Thur. Nov. 14, 2024, Sondheim Hall 110 &amp; online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><div><img src="https://ai.umbc.edu/wp-content/uploads/sites/734/2024/11/Patrick_Sui2-1.jpg" style="max-width: 100%; height: auto;"></div><div><br></div><div><a href="https://www.linkedin.com/in/peiqi-sui-4ba977282/" rel="nofollow external" class="bo">Peiqi "Patrick" Sui</a> will talk on <strong>Confabulation: What Could LLM Hallucinations Do For Storytelling?</strong>, 11:30am-12:50pm on Thursday, Nov. 14, 2024 in Sondheim Hall 110 at UMBC and <a href="https://my3.my.umbc.edu/groups/langtech/events/136093/join_meeting" rel="nofollow external" class="bo">online</a>. </div><div><br></div><div>Are hallucinations always bad? Most NLP research presumes a normative stance that they are, but this overlooks the cognitive and communicative affordances of a particular type of story-like hallucination (which we'll call confabulations). Consider two general categories of LLM applications: using them as tools, or interacting with them as viable cultural agents. The two have very different training objectives in terms of the tradeoff between factuality and alignment with the human behavior of storytelling, and when it comes to ensuring the latter, LLMs that could effectively confabulate would be especially useful. For instance, confabulations could enable LLMs to perform speculative narration and address omissions in history resulting from social injustice, in the hope of enacting what literary theorist Saidiya Hartman calls "critical fabulation" at scale, and giving interactive storytelling a wider social impact.</div><div><br></div><div><a href="https://www.linkedin.com/in/peiqi-sui-4ba977282/" rel="nofollow external" class="bo"><strong>Patrick Sui</strong></a> is a second-year PhD student in English at McGill University, advised by Richard Jean So. He mainly works in digital humanities and cultural analytics, and spends most of his time thinking about how literary studies could uniquely contribute to AI research about language. 
His current research topics include benchmarks for close reading &amp; interpretive reasoning, modeling close reading behaviors with information theory, knowledge-grounded style transfer for co-creative systems, AI literacy &amp; writing pedagogy, and all kinds of computational literary theory.</div><div><br></div><div>The talk is part of the UMBC <a href="https://laramartin.net/LaTeSS" rel="nofollow external" class="bo"><strong>Language Technology Seminar Series</strong></a>.</div><div><br></div> <hr><a href="https://ai.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Center for AI</strong></a></div>
]]>
</Body>
<Summary>Peiqi "Patrick" Sui will talk on Confabulation: What Could LLM Hallucinations Do For Storytelling?, 11:30am-12:50pm on Thursday, Nov. 14, 2024 in Sondheim Hall 110 at UMBC and online.      Are...</Summary>
<Website>https://laramartin.net/LaTeSS</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/145547/guest@my.umbc.edu/4654d8b43ac207b29c7a91e8e6ca00b8/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>lara-lab</Tag>
<Tag>llm</Tag>
<Tag>nlp</Tag>
<Tag>storytelling</Tag>
<Group token="umbc-ai">UMBC AI</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/umbc-ai</GroupUrl>
<AvatarUrl>https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="original">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
<AvatarUrl size="xsmall">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
<Sponsor>Language, Aid, and Representation AI Lab</Sponsor>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Mon, 11 Nov 2024 18:08:30 -0500</PostedAt>
<EditAt>Mon, 11 Nov 2024 19:06:30 -0500</EditAt>
</NewsItem>

<NewsItem contentIssues="false" id="144588" important="false" status="posted" url="https://beta.my.umbc.edu/groups/umbc-ai/posts/144588">
<Title>Talk today on AI for Event-Centric Video Retrieval, 1:30pm in ITE 325b</Title>
<Body>
<![CDATA[
    <div class="html-content"><span><span>If you are interested in a challenging AI problem involving integrated spoken language and video understanding, Reno Kriz from JHU will discuss the results of a large summer project focused on finding videos about specific current events. His presentation will be at 1:30 p.m. today (Tuesday, 10/8) in ITE 325b and also online. Register and get more information </span><a href="https://my3.my.umbc.edu/groups/langtech/events/134555" rel="nofollow external" class="bo"><span>here</span></a><span>.</span></span></div>
]]>
</Body>
<Summary>If you are interested in a challenging AI problem involving integrated spoken language and video understanding, Reno Kriz from JHU will discuss the results of a large summer project focused on...</Summary>
<Website>https://my3.my.umbc.edu/groups/langtech/events/134555</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/144588/guest@my.umbc.edu/dfbc935fe3eaaed0ea7269f34ad72685/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>audio</Tag>
<Tag>nlp</Tag>
<Tag>text</Tag>
<Tag>video</Tag>
<Tag>vision</Tag>
<Group token="umbc-ai">UMBC AI</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/umbc-ai</GroupUrl>
<AvatarUrl>https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="original">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
<AvatarUrl size="xsmall">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
<Sponsor>Language Technology Seminar Series</Sponsor>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Tue, 08 Oct 2024 10:10:00 -0400</PostedAt>
</NewsItem>

<NewsItem contentIssues="false" id="144205" important="false" status="posted" url="https://beta.my.umbc.edu/groups/umbc-ai/posts/144205">
<Title>Talk: Takeaways from the Workshop on Event-Centric Video Retrieval, Oct 8</Title>
<Tagline>Reno Kriz, JHU HLTCOE, 1:30-2:30pm EDT, Tue. Oct. 8</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><span><h4><span>Takeaways from the SCALE 2024 Workshop on Event-Centric Video Retrieval</span></h4><h5><span><strong>Reno Kriz, JHU HLTCOE</strong></span></h5><h5><span><strong>1:30-2:30 pm EDT Tuesday, October 8, 2024<br></strong></span><strong><span>ITE 325b, UMBC and </span><a href="https://my3.my.umbc.edu/groups/langtech/events/134555/join_meeting" rel="nofollow external" class="bo"><span>online</span></a></strong></h5><p><span>Information dissemination for current events has traditionally consisted of professionally collected and produced materials, leading to large collections of well-written news articles and high-quality videos. As a result, most prior work in event analysis and retrieval has focused on leveraging this traditional news content, particularly in English. However, much of the event-centric content today is generated by non-professionals, such as on-the-scene witnesses to events who hastily capture videos and upload them to the internet without further editing; these are challenging to find due to quality variance, as well as a lack of text or speech overlays providing clear descriptions of what is occurring. To address this gap, SCALE 2024, a 10-week research workshop hosted at the Human Language Technology Center of Excellence (HLTCOE), focused on multilingual event-centric video retrieval, or the task of finding videos about specific current events. Around 50 researchers and students participated in this workshop and were split up into five sub-teams. The Infrastructure team focused on developing MultiVENT 2.0, a challenging new video retrieval dataset consisting of 20x more videos than prior work and targeted queries about specific world events across six languages. The other teams worked on improving models from specific modalities, specifically Vision, Optical Character Recognition (OCR), Audio, and Text. 
Overall, we came away with three primary findings: extracting specific text from a video allows us to take better advantage of powerful methods from the text information retrieval community; LLM summarization of initial text outputs from videos is helpful, especially for noisy text coming from OCR; and no one modality is sufficient, with fusing outputs from all modalities resulting in significantly higher performance.</span></p><p><a href="https://hltcoe.jhu.edu/researcher/reno-kriz/" rel="nofollow external" class="bo"><span>Reno Kriz</span></a><span> is a research scientist at the Johns Hopkins University </span><a href="https://hltcoe.jhu.edu/" rel="nofollow external" class="bo"><span>Human Language Technology Center of Excellence</span></a><span> (</span><a href="https://hltcoe.jhu.edu/" rel="nofollow external" class="bo"><span>HLTCOE</span></a><span>). His primary research interests involve leveraging large pre-trained models for a variety of natural language understanding tasks, including those crossing into other modalities, e.g., vision and speech understanding. These multimodal interests have recently involved the 2024 Summer Camp for Applied Language Exploration (SCALE) on event-centric video retrieval and understanding. He received his PhD from the University of Pennsylvania, where he worked with Chris Callison-Burch and Marianna Apidianaki on text simplification and natural language generation. Prior to that, he received BA degrees in Computer Science, Mathematics, and Economics from Vassar College.</span></p><p><span>Part of the<strong> <a href="https://laramartin.net/LaTeSS.html" rel="nofollow external" class="bo">UMBC Language Technology Seminar Series</a></strong></span></p></span>
    <hr><a href="https://ai.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Center for AI</strong></a></div>
]]>
</Body>
<Summary>Takeaways from the SCALE 2024 Workshop on Event-Centric Video Retrieval  Reno Kriz, JHU HLTCOE  1:30-2:30 pm EDT Tuesday, October 8, 2024 ITE 325b, UMBC and online  Information dissemination for...</Summary>
<Website>https://my3.my.umbc.edu/groups/langtech/events/134555</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/144205/guest@my.umbc.edu/76480d509d15b5184c20f45141e37422/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>events</Tag>
<Tag>language</Tag>
<Tag>nlp</Tag>
<Tag>video</Tag>
<Group token="umbc-ai">UMBC AI</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/umbc-ai</GroupUrl>
<AvatarUrl>https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="original">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
<AvatarUrl size="xsmall">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
<Sponsor>Language Technology Seminar Series</Sponsor>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Tue, 24 Sep 2024 18:37:45 -0400</PostedAt>
<EditAt>Tue, 24 Sep 2024 19:33:24 -0400</EditAt>
</NewsItem>

<NewsItem contentIssues="false" id="143688" important="false" status="posted" url="https://beta.my.umbc.edu/groups/umbc-ai/posts/143688">
<Title>Talk: AI Resilient Interfaces for Code Generation and Efficient Reading, Dr. Jonathan Kummerfeld</Title>
<Tagline>3-4pm EDT, Tue. 10 Sept. 2024, ITE 325b at UMBC &amp; online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><span><p><span>The UMBC </span><a href="https://my3.my.umbc.edu/groups/langtech" rel="nofollow external" class="bo"><span><strong>Language Technology Seminar Series</strong></span></a><strong> </strong><span>(LaTeSS – pronounced lattice) showcases talks from experts researching various language technologies, including but not limited to natural language processing, computational linguistics, speech processing, and digital humanities. UMBC people can join the group </span><a href="https://my3.my.umbc.edu/groups/langtech" rel="nofollow external" class="bo"><span><strong>here</strong></span></a><span>.</span></p><hr><br><h3><span>AI Resilient Interfaces for Code Generation and Efficient Reading</span></h3><h4><span>Dr. Jonathan K. Kummerfeld, University of Sydney</span></h4><p><strong><span>3-4 pm EDT Tuesday, 10 Sept. 2024, ITE 325b at UMBC and online via </span><a href="https://umbc.webex.com/meet/laramar" rel="nofollow external" class="bo"><span>WebEx</span></a></strong></p><p><span>AI is being integrated into virtually every computer system we use, but often in ways that mean we cannot see the decisions AI makes for us. If we don't see a decision, we cannot notice whether we agree with it, and what we don't notice, we cannot change. For example, using an AI summarization system means trusting that it has captured all the aspects of a document that are relevant to you. If the task is high stakes, then the only way to check is to read the original document, but that significantly decreases the value of the summary. In this talk, I will present the concept of AI resilient interfaces: systems that use AI while giving users the information they need to notice and change its decisions. I will walk through two examples of novel systems that are more AI resilient than the typical solutions for (1) SQL generation and (2) faster reading. 
I will conclude with thoughts on the potential and pitfalls of designing with AI resilience in mind.</span></p><p><a href="https://jkk.name/" rel="nofollow external" class="bo"><span><strong>Jonathan K. Kummerfeld</strong></span></a><span> is a Senior Lecturer (i.e., research tenure-track Assistant Professor) in the School of Computer Science at the University of Sydney. He is currently also a </span><a href="https://www.arc.gov.au/funding-research/funding-schemes/discovery-program/discovery-early-career-researcher-award-decra" rel="nofollow external" class="bo"><span><strong>DECRA</strong></span></a><span> fellow, and collaborates with a range of academics across the world, including on DARPA-funded projects on AI agents that communicate. He completed his Ph.D. at the University of California, Berkeley, and was previously a postdoc at the University of Michigan, and a visiting scholar at Harvard. Jonathan’s research focuses on interactions between people and NLP systems, developing more effective algorithms, workflows, and systems for collaboration. He has been on the program committee for over 50 conferences and workshops. He currently serves as Co-CTO of ACL Rolling Review (a peer review system) and is a standing reviewer for the Computational Linguistics journal and the Transactions of the Association for Computational Linguistics journal.</span></p><br></span> <hr><a href="https://ai.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Center for AI</strong></a></div>
]]>
</Body>
<Summary>The UMBC Language Technology Seminar Series  (LaTeSS – pronounced lattice) showcases talks from experts researching various language technologies, including but not limited to natural language...</Summary>
<Website>https://laramartin.net/LaTeSS</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/143688/guest@my.umbc.edu/715a05c531c2ad2dc335d3eeb3715d46/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>llm</Tag>
<Tag>nlp</Tag>
<Group token="umbc-ai">UMBC AI</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/umbc-ai</GroupUrl>
<AvatarUrl>https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="original">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/original.png?1691095779</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xlarge.png?1691095779</AvatarUrl>
<AvatarUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/large.png?1691095779</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/medium.png?1691095779</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/small.png?1691095779</AvatarUrl>
<AvatarUrl size="xsmall">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xsmall.png?1691095779</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets1-beta.my.umbc.edu/system/shared/avatars/groups/000/002/081/cfb27ebe008c2636486089a759ea5c36/xxsmall.png?1691095779</AvatarUrl>
<Sponsor>UMBC Language Technology Seminar Series</Sponsor>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Sat, 07 Sep 2024 12:03:44 -0400</PostedAt>
</NewsItem>

</News>
