<?xml version="1.0"?>
<News hasArchived="false" page="1" pageCount="2" pageSize="10" timestamp="Mon, 20 Apr 2026 02:39:29 -0400" url="https://beta.my.umbc.edu/groups/cybersecurity/posts.xml?tag=ai">
<NewsItem contentIssues="true" id="148899" important="false" status="posted" url="https://beta.my.umbc.edu/groups/cybersecurity/posts/148899">
<Title>Talk: Adaptive Domain Inference Attack with Concept Hierarchy 4/11</Title>
<Tagline>12-1pm EDT Friday, April 11, 2025, online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><h3><strong><span>Adaptive Domain Inference Attack with Concept Hierarchy</span></strong></h3><div><h4><strong><span><a href="https://www.csee.umbc.edu/keke-chen/" rel="nofollow external" class="bo">Professor Keke Chen</a>, CSEE, </span><span><span>UMBC</span></span></strong></h4><h5><span>12–1pm </span><span>Fri.,</span><span> April 11<span>, 2025 <a href="https://umbc.webex.com/meet/sherman" rel="nofollow external" class="bo">online</a></span></span></h5><p><span> </span><span>Joint work with</span><span> <a href="https://yue-chun.com/" rel="nofollow external" class="bo">Yuechun Gu</a> and </span><span><span><a href="https://www.linkedin.com/in/jiajie-he-672673243/" rel="nofollow external" class="bo">Jiajie He</a></span></span></p><p><span>To appear, 2015 Int. Conf. on Knowledge Discovery and Data Mining</span></p><p><span>With increasingly deployed deep neural networks in sensitive application domains, such as healthcare and security, it is essential to understand what kind of sensitive information can be inferred from these models. Most known model-targeted attacks assume attackers have learned the application domain or training data distribution to ensure successful attacks. Can removing the domain information from model APIs protect models from these attacks? Our work studies this critical problem. Unfortunately, even with minimal knowledge, i.e., accessing the model as an unnamed function without leaking the meaning of input and output, the proposed adaptive <em>domain inference (ADI)</em> attack can still successfully estimate relevant subsets of training data. We show that the extracted relevant data can significantly improve the performance of model-inversion attacks, for instance. Specifically, the ADI method uses the <em>concept hierarchy</em> extracted from the public and private datasets that the attacker can access, and it applies a novel algorithm to adaptively tune the likelihood of leaf concepts in the hierarchy showing up in the unseen training data. For comparison, we also designed a straightforward hypothesis-testing-based attack called LDI. Among all candidate methods, the ADI attack extracts partial training data at the concept level, converges fastest, and requires the fewest target-model accesses.</span></p><p><span><strong><a href="https://www.csee.umbc.edu/keke-chen/" rel="nofollow external" class="bo">Dr. Keke Chen</a></strong></span><span> is an associate professor in the UMBC CSEE Department. His recent research focuses on privacy and security issues with AI model training and deployment. He earned his PhD in computer science from Georgia Tech in 2006. Before joining UMBC, he was a Northwestern Mutual associate professor of computer science at Marquette University. </span></p><p><em><span>Support for this event was provided in part by the NSF under SFS grant </span></em><em><span>DGE-1753681<span>.</span></span></em></p></div></div>
]]>
</Body>
<Summary>Adaptive Domain Inference Attack with Concept Hierarchy   Professor Keke Chen, CSEE, UMBC  12–1pm Fri., April 11, 2025 online   Joint work with Yuechun Gu and Jiajie He  To appear, 2025 Int. Conf....</Summary>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/148899/guest@my.umbc.edu/fe284a5648baf79678c69dec470ddc03/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>cybersecurity</Tag>
<Tag>inference</Tag>
<Tag>ontology</Tag>
<Group token="cybersecurity">UMBC Cybersecurity Institute Group</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/cybersecurity</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="original">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/original.png?1734891477</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/large.png?1734891477</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/medium.png?1734891477</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/small.png?1734891477</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxsmall.png?1734891477</AvatarUrl>
<Sponsor>UMBC Cybersecurity Institute Group</Sponsor>
<ThumbnailUrl size="xxlarge">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/899/f18caf116a1b8c339e57e98733a9f42a/xxlarge.jpg?1744224742</ThumbnailUrl>
<ThumbnailUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/899/f18caf116a1b8c339e57e98733a9f42a/xlarge.jpg?1744224742</ThumbnailUrl>
<ThumbnailUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/899/f18caf116a1b8c339e57e98733a9f42a/large.jpg?1744224742</ThumbnailUrl>
<ThumbnailUrl size="medium">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/899/f18caf116a1b8c339e57e98733a9f42a/medium.jpg?1744224742</ThumbnailUrl>
<ThumbnailUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/899/f18caf116a1b8c339e57e98733a9f42a/small.jpg?1744224742</ThumbnailUrl>
<ThumbnailUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/899/f18caf116a1b8c339e57e98733a9f42a/xsmall.jpg?1744224742</ThumbnailUrl>
<ThumbnailUrl size="xxsmall">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/899/f18caf116a1b8c339e57e98733a9f42a/xxsmall.jpg?1744224742</ThumbnailUrl>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Wed, 09 Apr 2025 14:54:25 -0400</PostedAt>
</NewsItem>

<NewsItem contentIssues="true" id="148342" important="false" status="posted" url="https://beta.my.umbc.edu/groups/cybersecurity/posts/148342">
<Title>Talk: A Privacy and Security Analysis of myUMBC Answers, 3/28</Title>
<Tagline>12-1pm ET Friday, March 28, 2025, online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><h3>A Privacy and Security Analysis of myUMBC Answers: UMBC SFS Scholar Winter Study 2025</h3><div><br></div><h4><a href="https://damslabumbc.github.io/author/christian-badolato/" rel="nofollow external" class="bo">Christian Badolato<br></a>12–1pm Friday, March 28, 2025 <a href="https://umbc.webex.com/meet/sherman" rel="nofollow external" class="bo">online</a></h4><div><br></div><div><a href="https://en.wikipedia.org/wiki/Generative_artificial_intelligence" rel="nofollow external" class="bo"><strong>Generative AI</strong></a> has the potential to improve user search experiences by supporting natural language querying and providing more detailed and domain-specific responses. UMBC is seeking to provide this convenience to myUMBC users through the <a href="https://my3.my.umbc.edu/groups/doit/posts/147188" rel="nofollow external" class="bo"><strong>myUMBC Answers</strong></a> system, which enables users to access both personal and UMBC services information from the myUMBC search bar. In this talk, we investigate the resiliency of Answers against several common generative AI attacks that were performed by the UMBC Scholarship for Service (SFS) scholars in collaboration with other students and the UMBC Division of Information Technology (DoIT). We first provide an overview of the study and the myUMBC Answers system before discussing the types of attacks which were launched against the system. We then explore the behavior of the Answers system in response to these attacks. Finally, we outline the recommendations provided to DoIT by the study participants to improve the security and user experience of myUMBC Answers.</div><div><br></div><div><a href="null" rel="nofollow external" class="bo"><strong>Christian Badolato</strong></a> is a PhD student working with Professor <a href="https://robertoyus.com/" rel="nofollow external" class="bo"><strong>Roberto Yus</strong></a> focusing on data privacy in the Internet of Things at UMBC after having received his master’s degree from the same university. He has several years of experience as a software architect and is a Certified Information Systems Security Professional.</div><div><br></div><div>Support for this <a href="https://cisa.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Cyber Defense Lab</strong></a> event was provided in part by NSF under SFS grant DGE-1753681.</div><div><br></div></div>
]]>
</Body>
<Summary>A Privacy and Security Analysis of myUMBC Answers: UMBC SFS Scholar Winter Study 2025     Christian Badolato 12–1pm Friday, March 28, 2025 online     Generative AI has the potential to improve...</Summary>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/148342/guest@my.umbc.edu/4d1af12d45ef4b1f267e575d72a7ef4b/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>privacy</Tag>
<Tag>security</Tag>
<Tag>umbc</Tag>
<Group token="cybersecurity">UMBC Cybersecurity Institute Group</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/cybersecurity</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="original">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/original.png?1734891477</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/large.png?1734891477</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/medium.png?1734891477</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/small.png?1734891477</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxsmall.png?1734891477</AvatarUrl>
<Sponsor>The UMBC Cyber Defense Lab</Sponsor>
<ThumbnailUrl size="xxlarge">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/342/7f9291521079d284b6738c920d38f486/xxlarge.jpg?1743014997</ThumbnailUrl>
<ThumbnailUrl size="xlarge">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/342/7f9291521079d284b6738c920d38f486/xlarge.jpg?1743014997</ThumbnailUrl>
<ThumbnailUrl size="large">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/342/7f9291521079d284b6738c920d38f486/large.jpg?1743014997</ThumbnailUrl>
<ThumbnailUrl size="medium">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/342/7f9291521079d284b6738c920d38f486/medium.jpg?1743014997</ThumbnailUrl>
<ThumbnailUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/342/7f9291521079d284b6738c920d38f486/small.jpg?1743014997</ThumbnailUrl>
<ThumbnailUrl size="xsmall">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/342/7f9291521079d284b6738c920d38f486/xsmall.jpg?1743014997</ThumbnailUrl>
<ThumbnailUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/148/342/7f9291521079d284b6738c920d38f486/xxsmall.jpg?1743014997</ThumbnailUrl>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Wed, 26 Mar 2025 13:47:00 -0400</PostedAt>
<EditAt>Wed, 26 Mar 2025 15:24:07 -0400</EditAt>
</NewsItem>

<NewsItem contentIssues="true" id="147556" important="false" status="posted" url="https://beta.my.umbc.edu/groups/cybersecurity/posts/147556">
<Title>Talk: The New Manhattan Project = Militarized AI</Title>
<Tagline>12&#8211;1pm EST, Friday, February 28, 2025, online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><h3>The New Manhattan Project = Militarized AI</h3><h4><a href="https://www.rit.edu/directory/jxpics-justin-pelletier" rel="nofollow external" class="bo">Justin M. Pelletier</a>, Rochester Institute of Technology</h4><h4>12–1pm EST, Friday, February 28, 2025, <a href="https://umbc.webex.com/meet/sherman" rel="nofollow external" class="bo">online</a> </h4><div><br></div><div><br></div><div>The resurgence of great power competition, underpinned by rapid advancements in artificial intelligence, necessitates a reevaluation of strategic doctrines akin to the urgency and innovation of the original Manhattan Project. This talk delves into the transformative integration of AI with autonomous combat units, examining historical analogs such as the impact of gunpowder in the Napoleonic Wars and the introduction of tanks and close air support during World War I, and juxtaposing these with the contemporary role of AI in warfare.</div><div><br></div><div>We begin by exploring the dual-use nature of AI technologies, emphasizing their role in both enhancing combat effectiveness and posing significant ethical and security risks, as illustrated by recent developments in narrative warfare and the militarization of marketing strategies. Drawing parallels with the disruptive impacts of past technological advances, the presentation invites an evaluation of the strategic implications of autonomous warfare systems, discussing the potential consequences on global security dynamics.  Furthermore, the discussion extends to safeguarding democratic processes in the age of AI, where the integrity of elections is increasingly susceptible to AI-driven information warfare. The presentation outlines the development of virtual voting infrastructures and their vulnerabilities, highlighting the ongoing challenges in protecting electoral systems from manipulation.</div><div><br></div><div>This examination advocates for robust ethical frameworks and international cooperation to harness AI's potential while mitigating its risks. By reflecting on historical technology shifts and forecasting future developments, the talk aims to widen the dialogue on the strategic, ethical, and policy dimensions necessary to navigate this new era of warfare and surveillance.</div><div><br></div><div><a href="https://www.rit.edu/directory/jxpics-justin-pelletier" rel="nofollow external" class="bo"><strong>Justin M. Pelletier</strong></a> is a Professor of Practice and Director of the <a href="https://www.rit.edu/cybersecurity/cyber-range" rel="nofollow external" class="bo"><strong>Cyber Range</strong></a> at the Rochester Institute of Technology (RIT). Dr. Pelletier teaches at the undergraduate and graduate levels in the Department of Cyber Security within RIT’s Golisano College of Computing and Information Sciences. He also orchestrates security assessments for partner organizations and is the founding director for the NSA-funded National Consortium for Cyber Governance, Risk and Compliance, which is housed within RIT's <a href="https://www.rit.edu/cybersecurity/" rel="nofollow external" class="bo"><strong>ESL Global Cybersecurity Institute</strong></a>. He holds a PhD in Information Assurance and Security, an MBA in Entrepreneurship, and a BS in Computer Science. Prior to joining academia, Dr.  Pelletier was a civil servant in the intelligence community and a member of the modeling and simulations working group within the U.S. National Security Council. 
He is a combat veteran and currently serves as a Lieutenant Colonel in the U.S. Army Reserve. Dr. Pelletier has authored more than three dozen scholarly articles, book chapters, and patents focused on security and information economics.</div><div><br></div>
    <hr><a href="https://ai.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Center for AI</strong></a></div>
]]>
</Body>
<Summary>The New Manhattan Project = Militarized AI  Justin M. Pelletier, Rochester Institute of Technology  12–1pm EST, Friday, February 28, 2025, online         The resurgence of great power competition,...</Summary>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/147556/guest@my.umbc.edu/1d465829f3bca5243c68d05a7547d1ac/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>military</Tag>
<Group token="cybersecurity">UMBC Cybersecurity Institute Group</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/cybersecurity</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="original">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/original.png?1734891477</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/large.png?1734891477</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/medium.png?1734891477</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/small.png?1734891477</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxsmall.png?1734891477</AvatarUrl>
<Sponsor>UMBC Cyber Defense Lab</Sponsor>
<ThumbnailUrl size="xxlarge">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/147/556/89bb032e13438186c4b979793e9cd5d2/xxlarge.jpg?1740268345</ThumbnailUrl>
<ThumbnailUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/147/556/89bb032e13438186c4b979793e9cd5d2/xlarge.jpg?1740268345</ThumbnailUrl>
<ThumbnailUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/147/556/89bb032e13438186c4b979793e9cd5d2/large.jpg?1740268345</ThumbnailUrl>
<ThumbnailUrl size="medium">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/147/556/89bb032e13438186c4b979793e9cd5d2/medium.jpg?1740268345</ThumbnailUrl>
<ThumbnailUrl size="small">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/147/556/89bb032e13438186c4b979793e9cd5d2/small.jpg?1740268345</ThumbnailUrl>
<ThumbnailUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/147/556/89bb032e13438186c4b979793e9cd5d2/xsmall.jpg?1740268345</ThumbnailUrl>
<ThumbnailUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/147/556/89bb032e13438186c4b979793e9cd5d2/xxsmall.jpg?1740268345</ThumbnailUrl>
<PawCount>1</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Sat, 22 Feb 2025 18:57:30 -0500</PostedAt>
<EditAt>Sat, 08 Mar 2025 15:51:17 -0500</EditAt>
</NewsItem>

<NewsItem contentIssues="false" id="147334" important="false" status="posted" url="https://beta.my.umbc.edu/groups/cybersecurity/posts/147334">
<Title>Talk: Unveiling Privacy Risks in AI: Data, Models, and Systems</Title>
<Tagline>11:30-12:30 Friday, February 14 in ITE325b and online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><div><span><strong><a href="https://www.cs.purdue.edu/homes/an93/" rel="nofollow external" class="bo">​Shengwei An</a> </strong>will give a talk on </span><strong>Unveiling Privacy Risks in AI: Data, Models, &amp; Systems</strong>, 11:30-12:30 <span>​Friday, February 14 in </span>ITE325b and <a href="https://my3.my.umbc.edu/groups/csee/events/140124/join_meeting" rel="nofollow external" class="bo">online</a><span>​.</span></div><div><br></div>Artificial Intelligence has become deeply integrated into diverse systems, transforming industries and reshaping our daily lives. However, this widespread adoption also introduces critical privacy risks across the training data, AI models, and AI-powered systems. This talk will explore privacy challenges through these three aspects. First, I will introduce the first high-fidelity attack that exposes the privacy vulnerabilities of training data in pre-trained models and commercial AI services. Next, I will present a novel physical impersonating attack that highlights the privacy risks inherent in AI-based authentication systems. Additionally, I will discuss the first data-free framework designed to eliminate trigger-based model watermarks in diffusion models that aim to protect their intellectual property. Finally, I will conclude with a forward-looking perspective on addressing privacy risks in emerging generative AI techniques, such as Large Language Models and Stable Diffusion Models.<div><br><div><div><span><p><a href="https://www.cs.purdue.edu/homes/an93/" rel="nofollow external" class="bo"> Shengwei An</a> is a Ph.D. candidate in the Department of Computer Science at Purdue University, advised by Prof. Xiangyu Zhang. His research focuses on AI security and privacy, with an emphasis on designing state-of-the-art tools to investigate and mitigate privacy vulnerabilities in real-world AI systems. His work has been published in top-tier conferences, including S&amp;P, USENIX Security, NDSS, and AAAI. He is the recipient of the Ross Fellowship from Purdue University and the Best Paper Award in the ECCV 2022 AROW Workshop.<br></p><p><br></p></span></div></div></div>
    <hr><a href="https://cybersecurity.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Cybersecurity Institute</strong></a></div>
]]>
</Body>
<Summary>Shengwei An will give a talk on Unveiling Privacy Risks in AI: Data, Models, &amp; Systems, 11:30-12:30 Friday, February 14 in ITE325b and online.    Artificial Intelligence has become deeply...</Summary>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/147334/guest@my.umbc.edu/843715b46ccd321b4da77b4ef49ddaa4/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>cybersecurity</Tag>
<Tag>privacy</Tag>
<Group token="cybersecurity">UMBC Cybersecurity Institute Group</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/cybersecurity</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="original">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/original.png?1734891477</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/large.png?1734891477</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/medium.png?1734891477</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/small.png?1734891477</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxsmall.png?1734891477</AvatarUrl>
<Sponsor>UMBC Cybersecurity Institute</Sponsor>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Fri, 14 Feb 2025 09:36:43 -0500</PostedAt>
<EditAt>Thu, 13 Mar 2025 17:14:24 -0400</EditAt>
</NewsItem>

<NewsItem contentIssues="true" id="146826" important="false" status="posted" url="https://beta.my.umbc.edu/groups/cybersecurity/posts/146826">
<Title>Talk: Do LLMs Exhibit Cybersecurity Misconceptions? 1/31 online</Title>
<Tagline>Evaluation of LLMs on CCI and CCA examinations</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><h4>Do LLMs Show Cybersecurity Misconceptions?<br></h4><h5>Evaluation of LLMs Performance on Cybersecurity Concept Inventories</h5><h5>Shan Huang, UIUC</h5><div><strong>Joint work with Jeffrey Herman and Alan Sherman, et al.</strong></div><div><strong>12:00–1pm ET Friday, Jan. 31, 2025, <a href="https://umbc.webex.com/meet/sherman" rel="nofollow external" class="bo">online</a></strong> </div><div><br></div><div>We evaluated the performance of five LLMs (Llama a, GPT-3.5-turbo, GPT-4, GPT-4O, and GPT-O1) on two cybersecurity concept inventories: <a href="https://dl.acm.org/doi/fullHtml/10.1145/3451346" rel="nofollow external" class="bo"><strong>Cybersecurity Concept Inventory</strong></a> (CCI) and <strong><a href="https://dl.acm.org/doi/10.1145/3545945.3569762" rel="nofollow external" class="bo">Cybersecurity Curriculum Assessment</a> </strong>(CCA). Using a zero-shot setting to minimize external influencing factors, we compared the performance of these LLMs with that of students previously studied, and we conducted a qualitative analysis of GPT-O1's output to examine if it exhibits misconceptions. Quantitative analysis reveals that, for the CCI and CCA, GPT-O1 significantly outperformed other models and students, correctly answering 92% of CCI and 72% of CCA test items. These results indicate GPT-O1’s strong proficiency in foundational topics (CCI) but reveal its limitations in addressing these concepts in more technically advanced scenarios (CCA). Qualitative analysis of GPT-O1’s reasoning patterns uncovered instances of insightful reasoning but also highlighted ways in which GPT-O1's answers reflect persistent student mistakes, such as biases, overgeneralizations, and logical inconsistencies. This work highlights the significant potential of GPT-O1 as a tool for introductory cybersecurity education in its ability to provide detailed explanations and structured reasoning for novice learners.</div><div><br></div><div><strong><a href="https://www.linkedin.com/in/shan-huang-262041193/" rel="nofollow external" class="bo">Shan Huang</a> </strong>is a Ph.D. candidate in Computer Science at the University of Illinois Urbana-Champaign. She is broadly interested in how educational games can improve student learning. Current work includes improving student learning in cybersecurity with educational games and accessing student knowledge of cybersecurity concepts. Shan is also involved in various educational data mining projects.</div><div><br></div><hr><a href="https://cybersecurity.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Cybersecurity Institute</strong></a></div>
]]>
</Body>
<Summary>Do LLMs Show Cybersecurity Misconceptions?   Evaluation of LLM Performance on Cybersecurity Concept Inventories  Shan Huang, UIUC  Joint work with Jeffrey Herman and Alan Sherman, et al....</Summary>
<Website>https://cybersecurity.umbc.edu/</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/146826/guest@my.umbc.edu/34eff5855f2e469c84e2d84660d9f67c/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>cca</Tag>
<Tag>cci</Tag>
<Tag>cybersecurity</Tag>
<Tag>llm</Tag>
<Group token="cybersecurity">UMBC Cybersecurity Institute Group</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/cybersecurity</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="original">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/original.png?1734891477</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/large.png?1734891477</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/medium.png?1734891477</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/small.png?1734891477</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxsmall.png?1734891477</AvatarUrl>
<Sponsor>UMBC Cyber Defense Lab</Sponsor>
<ThumbnailUrl size="xxlarge">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/826/e6632845d26b56a6caf99baab2f84036/xxlarge.jpg?1738158075</ThumbnailUrl>
<ThumbnailUrl size="xlarge">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/826/e6632845d26b56a6caf99baab2f84036/xlarge.jpg?1738158075</ThumbnailUrl>
<ThumbnailUrl size="large">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/826/e6632845d26b56a6caf99baab2f84036/large.jpg?1738158075</ThumbnailUrl>
<ThumbnailUrl size="medium">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/826/e6632845d26b56a6caf99baab2f84036/medium.jpg?1738158075</ThumbnailUrl>
<ThumbnailUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/826/e6632845d26b56a6caf99baab2f84036/small.jpg?1738158075</ThumbnailUrl>
<ThumbnailUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/826/e6632845d26b56a6caf99baab2f84036/xsmall.jpg?1738158075</ThumbnailUrl>
<ThumbnailUrl size="xxsmall">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/826/e6632845d26b56a6caf99baab2f84036/xxsmall.jpg?1738158075</ThumbnailUrl>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Wed, 29 Jan 2025 08:42:49 -0500</PostedAt>
</NewsItem>

<NewsItem contentIssues="false" id="146655" important="false" status="posted" url="https://beta.my.umbc.edu/groups/cybersecurity/posts/146655">
<Title>Talk: Securing Distributed Networks with Reinforcement Learning &amp; Game Theory</Title>
<Tagline>10-11am Thursday, Jan. 30, 2025; ITE459 and online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><span><h4><span>Securing Distributed Networks: Leveraging Reinforcement Learning and Game Theory for Attack Detection and Mitigation</span></h4><h4><a href="https://ischool.syracuse.edu/md-tariqul-islam-pavel/#Biography" rel="nofollow external" class="bo"><span><strong>Dr. Md Tariqul Islam</strong></span></a><span>, Syracuse University</span></h4><h4><span>10-11am January 30, 2025;  ITE 459, UMBC and </span><a href="https://umbc.webex.com/umbc/j.php?MTID=m47153e19db08254c1e0d30e43cad1b24" rel="nofollow external" class="bo"><span>online</span></a></h4><p><br></p><p><span>Reinforcement learning (RL) has demonstrated remarkable success across diverse domains, from mastering complex games to optimizing real-time feedback systems in robotics and industrial control. However, its potential in cybersecurity, particularly for autonomous attack detection and mitigation in distributed systems, remains largely underexplored. Traditional single-agent RL approaches struggle in decentralized environments where multiple entities make independent decisions, necessitating multi-agent reinforcement learning (MARL). Our research explores blockchain networks as an ideal test case due to their decentralized architecture and trustless consensus mechanisms. We developed a novel MARL-based consensus mechanism for Proof-of-Stake blockchains, enabling nodes to collaboratively identify and penalize malicious behavior while preserving decentralization. This approach </span><span>effectively mitigated six major blockchain attack types with minimal computational overhead. Building on these results, we propose integrating game-theoretic principles into the MARL framework to model adversarial strategies and enhance system resilience. The synergy between reinforcement learning and game theory establishes a robust foundation for dynamic and adaptive security in distributed systems, effectively addressing current vulnerabilities while anticipating and countering future threats. This integrated approach enables the design of resilient, scalable defense mechanisms tailored to the complex dynamics of decentralized architectures.</span></p><p><span><br></span></p><a href="https://ischool.syracuse.edu/md-tariqul-islam-pavel/#Biography" rel="nofollow external" class="bo"><span><strong>Dr. Md Tariqul Islam</strong></span></a><span> is an Assistant Professor of Trustworthy Cyberspace in the School of Information Studies (iSchool) at Syracuse University. His research focuses on advancing the security, efficiency, and fault tolerance of networks and distributed systems, particularly in the domains of cloud and blockchain technologies. To this end, he designs and develops novel algorithms, protocols, and frameworks that enhance system reliability and security. In his doctoral dissertation, "Algorithms for Achieving Fault-Tolerance and Ensuring Security in Cloud Computing Systems," he developed dynamic scheduling algorithms for cloud computing that optimize resource usage and reduce the risk of system failures. He also devised several cloud storage schemes to protect data confidentiality, integrity, and availability while mitigating potential security vulnerabilities. Expanding his work to blockchain, his current research seeks to strengthen the security of the Proof-of-Stake (PoS) consensus mechanism by using multi-agent reinforcement learning (MRL) to detect malicious nodes in blockchain network and integrating Game Theory and Zero-Shot Learning (ZSL) to ensure consensus integrity. 
His long-term vision is to build resilient distributed networks that prioritize security, trust, and scalability and support the evolving demands of next-generation decentralized applications. Dr. Islam earned his bachelor’s degree in Computer Science and Engineering from the University of Dhaka, Bangladesh (2008), and both a master’s (2016) and Ph.D. (2020) in Computer Science from the University of Kentucky.</span></span></div>
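    <div class="html-content"><p><em>The Python sketch below is a toy illustration of the collaborative detect-and-penalize idea described above; it is not the proposed consensus mechanism. Each validator keeps a per-peer score updated by a simple reinforcement-style rule from observed block outcomes, and the network penalizes peers flagged by a majority. The class names, reward definition, and majority rule are all assumptions.</em></p><pre>
from collections import defaultdict

class ValidatorAgent:
    """One validator's learned view of its peers (illustrative only)."""

    def __init__(self, alpha=0.2, flag_threshold=0.5):
        self.q = defaultdict(float)   # peer_id -> value of flagging that peer
        self.alpha = alpha            # learning rate
        self.flag_threshold = flag_threshold

    def observe(self, peer_id, block_was_invalid):
        # Reward +1 for flagging a peer whose block proved invalid, -1 otherwise;
        # the score moves toward the reward at rate alpha.
        reward = 1.0 if block_was_invalid else -1.0
        self.q[peer_id] += self.alpha * (reward - self.q[peer_id])

    def flags(self, peer_id):
        return self.q[peer_id] > self.flag_threshold

def consensus_penalties(agents, peer_ids):
    """Penalize peers flagged by a strict majority of validator agents."""
    penalized = []
    for peer in peer_ids:
        votes = sum(1 for agent in agents if agent.flags(peer))
        if votes * 2 > len(agents):
            penalized.append(peer)
    return penalized
</pre></div>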
]]>
</Body>
<Summary>Securing Distributed Networks: Leveraging Reinforcement Learning and Game Theory for Attack Detection and Mitigation  Dr. Md Tariqul Islam, Syracuse University  10-11am January 30, 2025;  ITE 459,...</Summary>
<Website>https://informationsystems.umbc.edu/home/calendar/events/</Website>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/146655/guest@my.umbc.edu/1662b1eba791abbe21e897be7fc2f7e4/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>cybersecurity</Tag>
<Tag>machine-learning</Tag>
<Tag>reinforcement-learning</Tag>
<Group token="cybersecurity">UMBC Cybersecurity Institute Group</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/cybersecurity</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="original">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/original.png?1734891477</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/large.png?1734891477</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/medium.png?1734891477</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/small.png?1734891477</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxsmall.png?1734891477</AvatarUrl>
<Sponsor>UMBC Cybersecurity Institute Group</Sponsor>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Wed, 22 Jan 2025 10:20:34 -0500</PostedAt>
<EditAt>Sat, 08 Mar 2025 15:52:14 -0500</EditAt>
</NewsItem>

<NewsItem contentIssues="true" id="146467" important="false" status="posted" url="https://beta.my.umbc.edu/groups/cybersecurity/posts/146467">
<Title>CodeBot'25 Workshop: Can We Trust AI-Generated Code?</Title>
<Tagline>Workshop Feb. 25-26, 2025 in Columbia, MD and online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><div><div><h3><strong>Can We Trust AI-Generated Code?</strong></h3></div><h5><strong>Workshop sponsored by UMBC &amp; Army Research Laboratory</strong></h5><h5><span>Feb. 25-26, 2025 </span><span>UMBC Training Centers, Columbia, MD &amp; online<br><br><p>
    Position paper deadline extended to 1/20/2025</p></span></h5>The era of generative AI is upon us, and chatbots such as ChatGPT are being used by programmers at all levels of experience to produce code.  Some generative AI systems, such as <a href="https://cloud.google.com/gemini/docs/codeassist/overview" rel="nofollow external" class="bo"><strong>Gemini Code Assist</strong></a>, specialize in code generation.  Unfortunately, AI-generated code often contains errors in the form of functionality that fails to meet specifications or vulnerabilities that can be exploited by hackers.  People have been working on program verification and secure coding for sixty years, but even so, the skill needed to find such errors is possessed by only a fraction of software engineers, and these skills are not being passed on to student programmers as they should be.<br><br>The goal of this FREE workshop is to gather and produce actionable ideas and suggestions that may be of use to the IT profession.  The workshop will consist of invited speakers, panels, and open discussion. </div><div><br></div><div><strong>We invite would-be participants to submit short position papers offering comments, observations, experiences, and suggestions that pertain to any or all of the following workshop themes:</strong><br><ol><li>What is or could be done to make AI-generated code more trustworthy, from the perspective of functionality and/or cybersecurity?</li><li>How can we do better at instilling the ideas and tools of secure development into the software profession?</li><li>Being able to produce quality code, with or without the aid of AI, seems to be related to system skills in general. How can we do better at giving students these skills before (or as) they enter the workplace?</li></ol>Position papers should be limited to three pages and submitted according to this <a href="https://docs.google.com/document/d/11nr-Zy2MPObMYihN2x_v2jS7EcUkOLXm/edit?usp=sharing&amp;ouid=117342243438066964240&amp;rtpof=true&amp;sd=true" rel="nofollow external" class="bo"><strong>template</strong></a>.  Submit your position paper via email to <a href="mailto:codebot25@umbc.edu" rel="nofollow external" class="bo"><strong>codebot25@umbc.edu</strong></a> after <strong><a href="https://forms.gle/CipmPbbBVBLfHc728" rel="nofollow external" class="bo">registering</a> </strong>for the workshop.</div><div><br></div><div>The organizing committee will select several papers for live presentation at the workshop. Selection will be based on relevance to the workshop themes, technical merit, and perceived interest to the audience.  Position papers that are mere marketing pieces will not be considered, but descriptions of hardware and software solutions tying into the themes described above are welcome. Limited travel support may be available for non-local speakers. Position papers and summaries of the discussions that follow will make up the core of the workshop report.<br><br>UMBC students, both graduate and undergraduate, are welcome to submit position papers that describe their own personal experience and observations with AI-generated code in their own words.  Students may include their resumes with position papers if they wish to have their work/resume circulated to other attendees.  
Domestic and international students are welcome to participate in this workshop.<br><br><strong>Important Dates:</strong><br></div><div>  <strong>Position paper submission deadline: January 20, 2025</strong></div><div>  Notice of acceptance: January 31, 2025<br>  Registration deadline: February 18, 2025<br>    (no registration fee, but space is limited)<br>  Workshop dates: February 25-26, 2025<br><br>The workshop will take place at <strong><a href="https://www.umbctraining.com/" rel="nofollow external" class="bo">UMBC Training Centers</a></strong>, 6996 Columbia Gateway Dr #100, Columbia, MD 21046</div><div><br></div><div><strong>REGISTER </strong>@ <a href="https://forms.gle/CipmPbbBVBLfHc728" rel="nofollow external" class="bo"><strong>https://forms.gle/CipmPbbBVBLfHc728</strong></a><br><br><strong>In-person space is limited, so register early! Based on RSVPs received, the organizing committee reserves the right to be selective about whom it invites to join the in-person meeting.</strong></div><div><br>Instructions for virtual participation will be made available prior to the workshop.<br><br><strong>Organizing Committee:</strong><br>  Prajna Bhandary, UMBC<br>  Mike De Lucia, Army Research Laboratory<br>  Richard Forno, UMBC<br>  Lindsay Gaughan, UMBC Training Centers<br>  Cynthia Matuszek, UMBC<br>  Charles Nicholas, UMBC<br>  Steve Simske, Colorado State University<br>  Larry Wagoner, Dept. of Defense<br>  Linda Kidder Yarlott, UMBC<br>  Paul Yu, Army Research Laboratory<br><br></div><div>Questions? Send email to <a href="mailto:codebot25@umbc.edu" rel="nofollow external" class="bo"><strong>codebot25@umbc.edu</strong></a></div>
    <br></div>
]]>
</Body>
<Summary>Can We Trust AI-Generated Code?   Workshop sponsored by UMBC &amp; Army Research Laboratory  Feb. 25-26, 2025 UMBC Training Centers, Columbia, MD &amp; online    position paper deadline extended...</Summary>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/146467/guest@my.umbc.edu/3963cf670aae6c0ef88607f359dd9477/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>assistant</Tag>
<Tag>codebot</Tag>
<Tag>coding</Tag>
<Tag>genai</Tag>
<Tag>llm</Tag>
<Tag>workshop</Tag>
<Group token="cybersecurity">UMBC Cybersecurity Institute Group</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/cybersecurity</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="original">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/original.png?1734891477</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/large.png?1734891477</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/medium.png?1734891477</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/small.png?1734891477</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxsmall.png?1734891477</AvatarUrl>
<Sponsor>UMBC Cybersecurity Institute</Sponsor>
<ThumbnailUrl size="xxlarge">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/467/e786a1f9c39521096c70fd762406108e/xxlarge.jpg?1736288331</ThumbnailUrl>
<ThumbnailUrl size="xlarge">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/467/e786a1f9c39521096c70fd762406108e/xlarge.jpg?1736288331</ThumbnailUrl>
<ThumbnailUrl size="large">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/467/e786a1f9c39521096c70fd762406108e/large.jpg?1736288331</ThumbnailUrl>
<ThumbnailUrl size="medium">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/467/e786a1f9c39521096c70fd762406108e/medium.jpg?1736288331</ThumbnailUrl>
<ThumbnailUrl size="small">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/467/e786a1f9c39521096c70fd762406108e/small.jpg?1736288331</ThumbnailUrl>
<ThumbnailUrl size="xsmall">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/467/e786a1f9c39521096c70fd762406108e/xsmall.jpg?1736288331</ThumbnailUrl>
<ThumbnailUrl size="xxsmall">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/467/e786a1f9c39521096c70fd762406108e/xxsmall.jpg?1736288331</ThumbnailUrl>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Tue, 07 Jan 2025 17:22:27 -0500</PostedAt>
</NewsItem>

<NewsItem contentIssues="false" id="146246" important="false" status="posted" url="https://beta.my.umbc.edu/groups/cybersecurity/posts/146246">
<Title>CodeBot '25: Can We Trust AI-Generated Code? 2/25-26</Title>
<Tagline>Workshop Feb. 25-26, 2025 in Columbia, MD and online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><div><div><strong><br></strong><h3><strong>Can We Trust AI-Generated Code?</strong></h3></div><h5><strong>Workshop sponsored by UMBC &amp; Army Research Laboratory</strong></h5><h5><span>Feb. 25-26, 2025 </span><span>UMBC Training Centers, Columbia, MD &amp; online</span></h5><br>The era of generative AI is upon us, and chatbots such as chatGPT are being used by programmers at all levels of experience to produce code.  Some generative AI systems, such as <a href="https://cloud.google.com/gemini/docs/codeassist/overview" rel="nofollow external" class="bo"><strong>Gemini Code Assist</strong></a>, specialize in code generation.  Unfortunately, AI-generated code often contains errors in the form of functionality that fails to meet specifications or vulnerabilities that can be exploited by hackers.  People have been working on program verification and secure coding for sixty years, but even so, the skill needed to find such errors is possessed by only a fraction of software engineers, and these skills are not being passed on to student programmers as they should be.<br><br>The goal of this FREE workshop is to gather and produce actionable ideas and suggestions that may be of use to the IT profession.  The workshop will consist of invited speakers, panels, and open discussion. </div><div><br></div><div><strong>We invite would-be participants to submit short position papers offering comments, observations, experiences, and suggestions that pertain to any or all of the following workshop themes:</strong><br><ol><li>What is or could be done to make AI-generated code more trustworthy, from the perspective of functionality and/or cybersecurity?</li><li>How can we do better at instilling the ideas and tools of secure development into the software profession?</li><li>Being able to produce quality code, with or without the aid of AI, seems to be related to system skills in general. How can we do better at giving students these skills before (or as) they enter the workplace?</li></ol>Position papers should limited to three pages and submitted according to this <a href="https://docs.google.com/document/d/11nr-Zy2MPObMYihN2x_v2jS7EcUkOLXm/edit?usp=sharing&amp;ouid=117342243438066964240&amp;rtpof=true&amp;sd=true" rel="nofollow external" class="bo"><strong>template</strong></a>.  The organizing committee will select several papers for live presentation at the workshop. Selection will be based on relevance to the workshop themes, technical merit, and perceived interest to the audience.  Position papers that are mere marketing pieces will not be considered, but descriptions of hardware and software solutions tying into the themes described above are welcome. Limited travel support may be available for non-local speakers. Position papers and summaries of the discussions that follow will make up the core of the workshop report.<br><br>UMBC students, both graduate or undergraduate, are welcome to submit position papers that describe their own personal experience and observations with AI-generated code in their own words.  Students may include their resumes with position papers if they wish to have their work/resume circulated to other attendees.  
Domestic and international students are welcome to participate in this workshop.<br><br><strong>Important Dates:</strong><br>  Position paper submission deadline: <strong>January 7, 2025</strong><br>  Notice of acceptance: January 31, 2025<br>  Registration deadline: February 18, 2025<br>    (no registration fee, but space is limited)<br>  Workshop dates: February 25-26, 2025<br><br>The workshop will take place at <strong><a href="https://www.umbctraining.com/" rel="nofollow external" class="bo">UMBC Training Centers</a></strong>, 6996 Columbia Gateway Dr #100, Columbia, MD 21046</div><div><br></div><div><strong>REGISTER </strong>@ <a href="https://forms.gle/CipmPbbBVBLfHc728" rel="nofollow external" class="bo"><strong>https://forms.gle/CipmPbbBVBLfHc728</strong></a><br><br><strong>In-person space is limited, so register early! Based on RSVPs received, the organizing committee reserves the right to be selective about whom it invites to join the in-person meeting.</strong></div><div><br>Instructions for virtual participation will be made available prior to the workshop.<br><br><strong>Organizing Committee:</strong><br>  Prajna Bhandary, UMBC<br>  Mike De Lucia, Army Research Laboratory<br>  Richard Forno, UMBC<br>  Lindsay Gaughan, UMBC Training Centers<br>  Cynthia Matuszek, UMBC<br>  Charles Nicholas, UMBC<br>  Steve Simske, Colorado State University<br>  Larry Wagoner, Dept. of Defense<br>  Linda Kidder Yarlott, UMBC<br>  Paul Yu, Army Research Laboratory<br><br></div><div>Questions? Send email to <a href="mailto:codebot25@umbc.edu" rel="nofollow external" class="bo"><strong>codebot25@umbc.edu</strong></a></div>
    <hr><a href="https://https://cybersecurity.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Cybersecurity Institute</strong></a></div>
]]>
</Body>
<Summary>Can We Trust AI-Generated Code?   Workshop sponsored by UMBC &amp; Army Research Laboratory  Feb. 25-26, 2025 UMBC Training Centers, Columbia, MD &amp; online  The era of generative AI is upon us,...</Summary>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/146246/guest@my.umbc.edu/d5167d07b6a032b83194a6f4a657be2e/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>code</Tag>
<Tag>programming</Tag>
<Tag>trust</Tag>
<Group token="cybersecurity">UMBC Cybersecurity Institute Group</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/cybersecurity</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="original">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/original.png?1734891477</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/large.png?1734891477</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/medium.png?1734891477</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/small.png?1734891477</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxsmall.png?1734891477</AvatarUrl>
<Sponsor>UMBC and Army Research Laboratory</Sponsor>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Thu, 12 Dec 2024 13:16:07 -0500</PostedAt>
<EditAt>Thu, 12 Dec 2024 13:18:11 -0500</EditAt>
</NewsItem>

<NewsItem contentIssues="true" id="146242" important="false" status="posted" url="https://beta.my.umbc.edu/groups/cybersecurity/posts/146242">
<Title>AI Lunchbox: Security Risk in AI/ML, 12/12</Title>
<Tagline>12:00-1:00 pm EST, Thursday, December 12, 2024</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><div><span>In the AI Lunchbox session </span><span><strong>Security Risk in AI/ML</strong></span><span>, participants will learn about attacks on AI models and how to defend against them. Designed for a general audience, this presentation will teach participants how to incorporate AI security risk into their organizational strategy and AI development workflows. </span><a href="https://www.linkedin.com/in/randyabernethy/" rel="nofollow external" class="bo"><span>Randy Abernethy</span></a><span> from </span><a href="https://rx-m.com/" rel="nofollow external" class="bo"><span>RX-M, LLC</span></a><span> will be a speaker. </span><span>The session will be held online from 12 to 1 p.m. EST on December 12, 2024. Register <a href="https://c4ai.umbctraining.com/event/security-risk-in-ai-ml/" rel="nofollow external" class="bo"><strong>here</strong></a> to receive a link to the</span><span> event from the UMBC Training Centers </span><a href="https://c4ai.umbctraining.com/" rel="nofollow external" class="bo"><span>Center for Applied AI</span></a><span>.</span></div>
    <hr><a href="https://cybersecurity.umbc.edu/" rel="nofollow external" class="bo"><strong>UMBC Cybersecurity Institute</strong></a></div>
]]>
</Body>
<Summary>In the AI Lunchbox session Security Risk in AI/ML, participants will learn about attacks on AI models and how to defend against them. Designed for a general audience, this presentation will teach...</Summary>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/146242/guest@my.umbc.edu/4dedc88a7fdb994c7c9d405ab1ebd828/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>risk</Tag>
<Tag>security</Tag>
<Group token="cybersecurity">UMBC Cybersecurity Institute Group</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/cybersecurity</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="original">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/original.png?1734891477</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/large.png?1734891477</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/medium.png?1734891477</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/small.png?1734891477</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxsmall.png?1734891477</AvatarUrl>
<Sponsor>UMBC Cybersecurity Institute Group</Sponsor>
<ThumbnailUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/242/cb605bcf2c880ac784cb1a694e5940e6/xxlarge.jpg?1734019439</ThumbnailUrl>
<ThumbnailUrl size="xlarge">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/242/cb605bcf2c880ac784cb1a694e5940e6/xlarge.jpg?1734019439</ThumbnailUrl>
<ThumbnailUrl size="large">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/242/cb605bcf2c880ac784cb1a694e5940e6/large.jpg?1734019439</ThumbnailUrl>
<ThumbnailUrl size="medium">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/242/cb605bcf2c880ac784cb1a694e5940e6/medium.jpg?1734019439</ThumbnailUrl>
<ThumbnailUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/242/cb605bcf2c880ac784cb1a694e5940e6/small.jpg?1734019439</ThumbnailUrl>
<ThumbnailUrl size="xsmall">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/242/cb605bcf2c880ac784cb1a694e5940e6/xsmall.jpg?1734019439</ThumbnailUrl>
<ThumbnailUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/thumbnails/news/000/146/242/cb605bcf2c880ac784cb1a694e5940e6/xxsmall.jpg?1734019439</ThumbnailUrl>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Thu, 12 Dec 2024 11:05:19 -0500</PostedAt>
</NewsItem>

<NewsItem contentIssues="true" id="145965" important="false" status="posted" url="https://beta.my.umbc.edu/groups/cybersecurity/posts/145965">
<Title>Talk: Privacy-Preserving Data Sharing in Intrusion Detection Systems, 12/6 online</Title>
<Tagline>12&#8211;1pm EST Friday, December 6, 2024, online</Tagline>
<Body>
<![CDATA[
    <div class="html-content"><span><h5><span><strong>UMBC Cyber Defense Lab presents</strong></span><span> </span></h5><h4><span>Privacy-Preserving Data Sharing in Intrusion Detection Systems</span></h4><h5><span><strong>Zhiyuan Chen<br></strong></span><span><strong>Professor and Chair, UMBC Information Systems Department</strong></span></h5><h5><strong><span>12:00–1pm, Friday, December 6, 2024, </span><a href="https://umbc.webex.com/meet/sherman" rel="nofollow external" class="bo"><span>online</span></a></strong></h5><div><br></div><p><span>Intrusion detection systems increasingly use machine learning methods, which require large volumes of data to be effective. Sharing such data sets will benefit the research community and industry. One obstacle to sharing such data is data privacy because network trace data or server log data often contains sensitive information, such as IP addresses. Even if IP addresses are encrypted, adversaries may still inject packets with unique patterns (e.g., with a certain packet sizes) such that they can use these packets to infer encrypted information. Another challenge arises when multiple intrusion detection systems from multiple organizations need to correlate their detected alerts to identify a larger threat, but the information they exchange may contain sensitive information such as network topology and traffic. This talk covers two approaches to address this problem. First, we propose a data anonymization approach that de-identifies network trace data. Compared to existing approaches, this approach provides stronger privacy protection and is robust to injection attacks. Second, we propose two privacy-preserving distributed alert correlation methods, one using additive secret sharing and the other using differential privacy. We also investigate tradeoffs between these two methods.</span></p><p><a href="https://userpages.umbc.edu/~zhchen/" rel="nofollow external" class="bo"><span><strong>Dr. Zhiyuan Chen</strong></span></a><span> is a Professor in the Department of Information Systems at UMBC. He received a BS and a MS from Fudan University, China, and a PhD in Computer Science from Cornell University. His research covers the areas of data science, big data, privacy preserving data mining and data management, data exploration and navigation, and semantic-based search and data integration using semantic networks, adversarial learning and its applications in cybersecurity. He has published extensively in these areas and has received funding from NSF, Department of Energy, IBM, Office of Naval Research, MITRE, and Department of Education.</span></p><p><span>Host: <a href="https://www.csee.umbc.edu/people/faculty/alan-t-sherman/" rel="nofollow external" class="bo">Alan T. Sherman</a>. Support for this event was provided in part by NSF under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly Fridays 12-1pm. All meetings are open to the public.</span></p><div><span><br></span></div></span></div>
]]>
</Body>
<Summary>UMBC Cyber Defense Lab presents   Privacy-Preserving Data Sharing in Intrusion Detection Systems  Zhiyuan Chen Professor and Chair, UMBC Information Systems Department  12:00–1pm, Friday, December...</Summary>
<TrackingUrl>https://beta.my.umbc.edu/api/v0/pixel/news/145965/guest@my.umbc.edu/e0574ed2de8fc798b31101e61e796b54/api/pixel</TrackingUrl>
<Tag>ai</Tag>
<Tag>cybersecurity</Tag>
<Tag>privacy</Tag>
<Group token="cybersecurity">UMBC Cybersecurity Institute Group</Group>
<GroupUrl>https://beta.my.umbc.edu/groups/cybersecurity</GroupUrl>
<AvatarUrl>https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="original">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/original.png?1734891477</AvatarUrl>
<AvatarUrl size="xxlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="xlarge">https://assets4-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xlarge.png?1734891477</AvatarUrl>
<AvatarUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/large.png?1734891477</AvatarUrl>
<AvatarUrl size="medium">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/medium.png?1734891477</AvatarUrl>
<AvatarUrl size="small">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/small.png?1734891477</AvatarUrl>
<AvatarUrl size="xsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xsmall.png?1734891477</AvatarUrl>
<AvatarUrl size="xxsmall">https://assets3-beta.my.umbc.edu/system/shared/avatars/groups/000/000/485/196da6a7ec6f4c31eab2e474c17a9ab7/xxsmall.png?1734891477</AvatarUrl>
<Sponsor>UMBC Center for Cybersecurity</Sponsor>
<ThumbnailUrl size="xxlarge">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/145/965/b9e9567763877786d428f0d3f7731e97/xxlarge.jpg?1732912896</ThumbnailUrl>
<ThumbnailUrl size="xlarge">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/145/965/b9e9567763877786d428f0d3f7731e97/xlarge.jpg?1732912896</ThumbnailUrl>
<ThumbnailUrl size="large">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/145/965/b9e9567763877786d428f0d3f7731e97/large.jpg?1732912896</ThumbnailUrl>
<ThumbnailUrl size="medium">https://assets2-beta.my.umbc.edu/system/shared/thumbnails/news/000/145/965/b9e9567763877786d428f0d3f7731e97/medium.jpg?1732912896</ThumbnailUrl>
<ThumbnailUrl size="small">https://assets4-beta.my.umbc.edu/system/shared/thumbnails/news/000/145/965/b9e9567763877786d428f0d3f7731e97/small.jpg?1732912896</ThumbnailUrl>
<ThumbnailUrl size="xsmall">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/145/965/b9e9567763877786d428f0d3f7731e97/xsmall.jpg?1732912896</ThumbnailUrl>
<ThumbnailUrl size="xxsmall">https://assets1-beta.my.umbc.edu/system/shared/thumbnails/news/000/145/965/b9e9567763877786d428f0d3f7731e97/xxsmall.jpg?1732912896</ThumbnailUrl>
<PawCount>0</PawCount>
<CommentCount>0</CommentCount>
<CommentsAllowed>true</CommentsAllowed>
<PostedAt>Fri, 29 Nov 2024 17:28:15 -0500</PostedAt>
</NewsItem>

</News>
