<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>SeaGlass Technology blog</title>
    <link>https://243085186.hs-sites-na2.com/seaglass-technology-blog</link>
    <description />
    <language>en</language>
    <pubDate>Thu, 02 Apr 2026 21:46:47 GMT</pubDate>
    <dc:date>2026-04-02T21:46:47Z</dc:date>
    <dc:language>en</dc:language>
    <item>
      <title>Practical AI Governance for Hedge Funds | SeaGlass Technology</title>
      <link>https://243085186.hs-sites-na2.com/seaglass-technology-blog/practical-ai-governance-for-hedge-funds-seaglass-technology</link>
      <description>&lt;div&gt; 
 &lt;table style="border-collapse: collapse; width: 104.427%;"&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style="width: 62.6826%; vertical-align: top; border-color: #0C5394;" width="437"&gt; &lt;p&gt;&lt;span&gt;&lt;img width="274" height="85" src="https://243085186.hs-sites-na2.com/hs-fs/hubfs/undefined.png?width=274&amp;amp;height=85&amp;amp;name=undefined.png"&gt;&lt;/span&gt;&lt;/p&gt; &lt;p&gt;&lt;span style="color: #999999;"&gt;Executive Brief&lt;/span&gt;&lt;/p&gt; &lt;p&gt;AI Is Already Inside Your Fund.&lt;br&gt;Does Anyone Own the Risk?&lt;/p&gt; &lt;p&gt;By Robert Choynowski, Chair, HFA Cybersecurity Committee | SeaGlass Technology&lt;/p&gt; &lt;p&gt;AI is already entering fund workflows through the tools teams use every day.&lt;/p&gt; &lt;p&gt;This brief outlines the cybersecurity, compliance, and operational risks that come with unmanaged adoption.&lt;/p&gt; &lt;p&gt;It also offers practical guardrails leadership can put in place now.&lt;/p&gt; 
     &lt;table style="border-collapse: collapse;"&gt; 
      &lt;tbody&gt; 
       &lt;tr&gt; 
        &lt;td style="width: 436.8px; background-color: #0f3a63; vertical-align: top; border: 1.33333px solid #0f3a63;" width="437"&gt; &lt;p style="padding-left: 5.75px;"&gt;&lt;span style="color: #ffffff;"&gt;The real question is no longer whether firms will use AI. It is whether they will govern it before it governs them.&lt;/span&gt;&lt;/p&gt; &lt;/td&gt; 
       &lt;/tr&gt; 
      &lt;/tbody&gt; 
     &lt;/table&gt; &lt;/td&gt; 
    &lt;td style="width: 37.3174%; vertical-align: top;" width="259"&gt; &lt;p style="text-align: center;"&gt;&lt;span&gt;&lt;img width="267" height="167" src="https://243085186.hs-sites-na2.com/hs-fs/hubfs/undefined.jpeg?width=267&amp;amp;height=167&amp;amp;name=undefined.jpeg"&gt;&lt;/span&gt;&lt;/p&gt; &lt;p&gt;Prepared for readers seeking practical AI governance guidance for hedge funds and alternative investment firms.&lt;/p&gt; &lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/div&gt;</description>
      <content:encoded>&lt;div&gt; 
 &lt;table style="border-collapse: collapse; width: 104.427%;"&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style="width: 62.6826%; vertical-align: top; border-color: #0C5394;" width="437"&gt; &lt;p&gt;&lt;span&gt;&lt;img width="274" height="85" src="https://243085186.hs-sites-na2.com/hs-fs/hubfs/undefined.png?width=274&amp;amp;height=85&amp;amp;name=undefined.png"&gt;&lt;/span&gt;&lt;/p&gt; &lt;p&gt;&lt;span style="color: #999999;"&gt;Executive Brief&lt;/span&gt;&lt;/p&gt; &lt;p&gt;AI Is Already Inside Your Fund.&lt;br&gt;Does Anyone Own the Risk?&lt;/p&gt; &lt;p&gt;By Robert Choynowski, Chair, HFA Cybersecurity Committee | SeaGlass Technology&lt;/p&gt; &lt;p&gt;AI is already entering fund workflows through the tools teams use every day.&lt;/p&gt; &lt;p&gt;This brief outlines the cybersecurity, compliance, and operational risks that come with unmanaged adoption.&lt;/p&gt; &lt;p&gt;It also offers practical guardrails leadership can put in place now.&lt;/p&gt; 
     &lt;table style="border-collapse: collapse;"&gt; 
      &lt;tbody&gt; 
       &lt;tr&gt; 
        &lt;td style="width: 436.8px; background-color: #0f3a63; vertical-align: top; border: 1.33333px solid #0f3a63;" width="437"&gt; &lt;p style="padding-left: 5.75px;"&gt;&lt;span style="color: #ffffff;"&gt;The real question is no longer whether firms will use AI. It is whether they will govern it before it governs them.&lt;/span&gt;&lt;/p&gt; &lt;/td&gt; 
       &lt;/tr&gt; 
      &lt;/tbody&gt; 
     &lt;/table&gt; &lt;/td&gt; 
    &lt;td style="width: 37.3174%; vertical-align: top;" width="259"&gt; &lt;p style="text-align: center;"&gt;&lt;span&gt;&lt;img width="267" height="167" src="https://243085186.hs-sites-na2.com/hs-fs/hubfs/undefined.jpeg?width=267&amp;amp;height=167&amp;amp;name=undefined.jpeg"&gt;&lt;/span&gt;&lt;/p&gt; &lt;p&gt;Prepared for readers seeking practical AI governance guidance for hedge funds and alternative investment firms.&lt;/p&gt; &lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/div&gt; 
&lt;p style="line-height: 1.25;"&gt;The conversation around artificial intelligence in the alternative investment space has changed quickly.&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;Not long ago, fund leaders were asking whether AI was relevant to their firms. Now the more important question is this: How much AI is already being used inside the organization, and who actually owns the risk?&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;At many firms, the honest answer is uncomfortable. AI is already showing up in daily workflows, but the governance around it has not kept pace.&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;This is not simply a technology issue. It is a cybersecurity, compliance, and operational risk issue. For hedge funds and other alternative investment firms, that matters.&lt;/p&gt; 
&lt;p style="font-weight: bold; line-height: 1;"&gt;&lt;span style="font-size: 20px; color: #0c5394;"&gt;AI adoption is happening quietly&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;Most funds did not approve some formal, enterprise-wide AI initiative. AI arrived through the tools employees were already using.&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;Microsoft Copilot is being introduced into M365 environments. Team members are using ChatGPT to summarize notes, draft emails, review documents, or accelerate research. Operations teams are exploring automation platforms with AI built in. Vendors are also adding AI features into products firms already rely on.&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;In many cases, none of this went through a formal risk review. It did not get vetted by compliance. It was not reviewed against data classification policies. And it may not have been evaluated for investor confidentiality, regulatory implications, or access control concerns.&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;That is what makes this so important. The issue is not just intentional AI adoption. It is unmanaged AI adoption.&lt;/p&gt; 
&lt;p style="font-size: 22px; font-weight: bold; line-height: 1;"&gt;&lt;span style="color: #073763;"&gt;Why this is a cybersecurity issue&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;AI governance often gets framed as an innovation conversation or a productivity conversation. It is both of those things, but for fund managers it is also a cybersecurity issue.&lt;/p&gt; 
&lt;p style="line-height: 1;"&gt;There are several reasons why.&lt;/p&gt; 
&lt;p style="font-weight: bold; line-height: 1;"&gt;&lt;span style="font-size: 20px; color: #3d85c6;"&gt;Sensitive data may leave the firm.&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;When employees paste internal material into public or consumer-grade AI tools, they may be exposing proprietary research, investor information, strategy documents, valuations, portfolio company details, or internal communications. Even when the risk is not obvious to the user, the exposure can be significant.&lt;/p&gt; 
&lt;p style="font-weight: bold; line-height: 1;"&gt;&lt;span style="font-size: 20px; color: #3d85c6;"&gt;AI can create false confidence.&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;Generative AI tools often produce answers that sound polished and credible, even when they are wrong. In a fund environment, that creates real risk. A flawed regulatory summary, an inaccurate memo, or an AI-assisted response built on incorrect assumptions can introduce compliance and operational exposure very quickly.&lt;/p&gt; 
&lt;p style="font-size: 20px; font-weight: bold; line-height: 1;"&gt;&lt;span style="color: #3d85c6;"&gt;Permissions problems become AI problems.&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;Enterprise AI tools do not magically create access issues. They expose the ones that already exist. If file permissions are overly broad in SharePoint, Teams, or other repositories, AI-powered search and summarization can make that problem far more visible. Employees may suddenly be able to surface information they technically had access to but never would have found otherwise.&lt;/p&gt; 
&lt;p style="font-size: 20px; font-weight: bold; line-height: 1;"&gt;&lt;span style="color: #3d85c6;"&gt;Vendor risk is evolving.&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;Many third-party platforms now include AI features, whether firms realize it or not. That means traditional vendor due diligence may no longer be enough. Firms should understand where data is processed, what is retained, whether customer content is used to train models, how security incidents are handled, and what contractual protections are in place.&lt;/p&gt; 
&lt;p style="font-weight: bold; line-height: 1;"&gt;&lt;span style="font-size: 22px; color: #073763;"&gt;The bigger issue: nobody clearly owns it&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;One of the most common problems is not that firms lack smart people. It is that AI risk often falls into a gap between departments.&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;IT may assume compliance is handling it. Compliance may assume IT is reviewing the tools. Legal may only get involved once a vendor contract appears. Business leaders may allow teams to experiment because the tools seem harmless and productivity gains are appealing.&lt;/p&gt; 
&lt;p style="line-height: 1;"&gt;Meanwhile, AI use keeps spreading.&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;That lack of ownership is the real concern. Because once a productivity tool becomes embedded in daily processes, it gets much harder to unwind.&lt;/p&gt; 
&lt;p style="font-weight: bold; line-height: 1;"&gt;&lt;span style="font-size: 20px; color: #073763;"&gt;Funds do not need to panic, but they do need guardrails&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;The goal is not to block every AI tool. That is unrealistic, and in many cases unnecessary. The goal is to put guardrails in place before adoption outruns oversight.&lt;/p&gt; 
&lt;p style="line-height: 1;"&gt;A good starting point is often much simpler than people think.&lt;/p&gt; 
&lt;p style="font-size: 20px; font-weight: bold; line-height: 1;"&gt;&lt;span style="color: #3d85c6;"&gt;1. Create an AI use policy&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;Every firm should have a basic, plain-English policy that answers a few practical questions:&lt;/p&gt; 
&lt;ul style="line-height: 1;"&gt; 
 &lt;li&gt; &lt;p&gt;What kinds of AI tools are permitted?&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;What kinds of firm data may never be entered into AI systems?&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;Which uses require approval?&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;Who is responsible for oversight?&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;What review process applies before a new AI-enabled vendor or feature is adopted?&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;p style="line-height: 1.25;"&gt;This does not need to be a 30-page manual. It just needs to be clear enough that employees understand the rules.&lt;/p&gt; 
&lt;p style="font-weight: bold; line-height: 1;"&gt;&lt;span style="font-size: 20px; color: #3d85c6;"&gt;2. Inventory where AI is already in use&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;Before building a future-state AI strategy, firms should understand current-state exposure.&lt;/p&gt; 
&lt;ul style="line-height: 1;"&gt; 
 &lt;li&gt; &lt;p&gt;Public AI tools employees are already using&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;Enterprise tools with AI features enabled&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;T&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;h&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;i&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;rd-party vendors that process or analyze firm data using AI&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;I&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;n&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;t&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;e&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;rnal automation workflows with AI components&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;p style="line-height: 1;"&gt;You cannot govern what you have not identified.&lt;/p&gt; 
&lt;p style="font-weight: bold; font-size: 20px; line-height: 1;"&gt;&lt;span style="color: #3d85c6;"&gt;3. Revisit data classification and permissions&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1;"&gt;AI governance is tightly connected to basic cyber hygiene.&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;If firms do not know where sensitive data lives, who has access to it, and whether permissions are properly scoped, AI can amplify those weaknesses. In many environments, the quickest win is not an AI tool decision at all. It is cleaning up access controls, data sprawl, and file-sharing practices.&lt;/p&gt; 
&lt;p style="font-size: 20px; font-weight: bold; line-height: 1;"&gt;&lt;span style="color: #3d85c6;"&gt;4. Update vendor due diligence&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1;"&gt;Vendor reviews should now include AI-specific questions. For example:&lt;/p&gt; 
&lt;ul style="line-height: 1;"&gt; 
 &lt;li&gt; &lt;p&gt;Does the vendor use customer data to train models?&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;Where is the data stored and processed?&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;I&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;s&lt;/span&gt;&lt;span style="background-color: transparent;"&gt; &lt;/span&gt;&lt;span style="background-color: transparent;"&gt;d&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;ata retained after prompts or tasks are completed?&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;C&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;a&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;n AI functionality be disabled if needed?&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;W&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;h&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;at transparency does the firm have into model behavior and outputs?&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p style="line-height: 1.25;"&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;W&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;h&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;at does the vendor's breach notification and incident response process look like when AI systems are involved?&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;p style="line-height: 1;"&gt;These are no longer niche questions.&lt;/p&gt; 
&lt;p style="font-weight: bold; font-size: 20px; line-height: 1;"&gt;&lt;span style="color: #3d85c6;"&gt;5. Train employees before a problem occurs&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1;"&gt;Policies alone are not enough. Employees need examples they can relate to.&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;Most people are not trying to create risk. They are just trying to save time. Training should focus on real-world fund scenarios such as:&lt;/p&gt; 
&lt;ul style="line-height: 1;"&gt; 
 &lt;li&gt; &lt;p&gt;summarizing internal meeting notes&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;drafting investor communications&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;u&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;s&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;i&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;ng AI to review agreements or policies&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;a&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;n&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;a&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;lyzing spreadsheets or portfolio information&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;r&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;e&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;s&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;earching compliance questions&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;p style="line-height: 1;"&gt;The more practical the training, the more likely it is to work.&lt;/p&gt; 
&lt;p style="font-weight: bold; font-size: 22px; line-height: 1;"&gt;&lt;span style="color: #073763;"&gt;What leadership should be asking now&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;For fund executives, COOs, CFOs, CTOs, CISOs, and compliance leaders, this is the moment to ask a few direct questions:&lt;/p&gt; 
&lt;ul style="line-height: 1;"&gt; 
 &lt;li&gt; &lt;p&gt;Do we know which AI tools are currently in use across the firm?&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;Has anyone defined what data can and cannot be used with those tools?&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;&lt;span style="background-color: transparent;"&gt;&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;A&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;r&lt;/span&gt;&lt;span style="background-color: transparent;"&gt;e&lt;/span&gt;&lt;span style="background-color: transparent;"&gt; our M365 and collaboration permissions actually in good shape?&lt;/span&gt;&lt;/p&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;ul style="line-height: 1;"&gt; 
 &lt;li&gt; &lt;p&gt;Have our vendors disclosed where AI is embedded in their offerings?&lt;/p&gt; &lt;/li&gt; 
 &lt;li&gt; &lt;p&gt;Who owns AI governance here, in practice, not just in theory?&lt;/p&gt; &lt;/li&gt; 
&lt;/ul&gt; 
&lt;p style="font-size: 22px; font-weight: bold; line-height: 1;"&gt;&lt;span style="color: #073763;"&gt;The firms that handle this well will have an advantage&lt;/span&gt;&lt;/p&gt; 
&lt;p style="line-height: 1;"&gt;There is a tendency to frame AI risk and AI opportunity as opposing forces. They are not.&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;The firms that put sensible guardrails in place early will be in a far better position to benefit from AI responsibly. They will be able to move faster with more confidence, reduce unnecessary exposure, and demonstrate to investors, regulators, and internal stakeholders that adoption is being handled thoughtfully.&lt;/p&gt; 
&lt;p style="line-height: 1;"&gt;AI is already inside the fund environment, whether leadership intended it or not.&lt;/p&gt; 
&lt;p style="line-height: 1.25;"&gt;The real question now is not whether firms will use AI. It is whether they will govern it before it governs them.&lt;/p&gt; 
&lt;div&gt; 
 &lt;table style="border-collapse: collapse; width: 99.8698%;"&gt; 
  &lt;tbody&gt; 
   &lt;tr&gt; 
    &lt;td style="width: 100%; background-color: #eef4fa; vertical-align: top; border: 1.33333px solid #c8d9ec;" width="672"&gt; &lt;p style="margin-right: 0px; line-height: 1.25;"&gt;About the Author&lt;/p&gt; &lt;p style="margin-right: 0px; line-height: 1.25;"&gt;Robert Choynowski is Chair of the HFA Cybersecurity Committee and a member of the HFA Global Board of Directors. He is also a founder and Chief Visionary of SeaGlass Technology, which supports hedge funds and alternative investment firms with managed IT, cybersecurity, and strategic technology leadership.&lt;/p&gt; &lt;p style="line-height: 1.25;"&gt;Contact&lt;/p&gt; &lt;p style="line-height: 1.25;"&gt;If you'd like to compare notes on AI governance, cybersecurity, or technology risk within your firm, Robert can be reached at robert@seaglasstechnology.com or through the HFA member network.&lt;/p&gt; &lt;/td&gt; 
   &lt;/tr&gt; 
  &lt;/tbody&gt; 
 &lt;/table&gt; 
&lt;/div&gt;</content:encoded>
      <pubDate>Thu, 02 Apr 2026 21:46:47 GMT</pubDate>
      <author>robert@seaglasstechnology.com (Robert Choynowski)</author>
      <guid>https://243085186.hs-sites-na2.com/seaglass-technology-blog/practical-ai-governance-for-hedge-funds-seaglass-technology</guid>
      <dc:date>2026-04-02T21:46:47Z</dc:date>
    </item>
  </channel>
</rss>
