Researchers Find Security Holes in Microsoft's Health AI Chatbot
Microsoft's AI chatbot service for healthcare can be tricked into giving attackers access to restricted data, researchers at security firm Tenable reported this week.
In a blog post published Tuesday, Tenable's Jimi Sebree described two vulnerabilities in Azure AI Health Bot that leave the service open to server-side request forgery (SSRF) attacks.
Azure AI Health Bot is a service for developers to build natural-language user interfaces tailored for healthcare scenarios. Developers can use it to build intelligent chatbots trained on an organization's internal data and workflows, as well as on data from outside sources. A "data connections" feature in Azure AI Health Bot lets developers configure chatbots so they can access external data whenever appropriate.
A server-side request forgery attack could compel a chatbot built with Azure AI Health Bot to reach data sources it has not been granted access to -- including those belonging to other Azure customers.
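In general terms, an SSRF flaw arises when a service fetches URLs on a user's behalf without restricting where those URLs may point. The following minimal sketch is hypothetical -- it is not Tenable's proof of concept or Microsoft's code -- but it shows the shape of the problem:

    import requests

    def fetch_data_connection(url: str) -> str:
        """Hypothetical data-connection handler: fetches a configured
        endpoint on the chatbot's behalf and returns the response body."""
        # Nothing here restricts where the URL may point -- the essence
        # of a server-side request forgery (SSRF) flaw.
        response = requests.get(url, timeout=5)
        response.raise_for_status()
        return response.text

    # A legitimate data connection targets an external healthcare API:
    # fetch_data_connection("https://api.example-hospital.com/records")
    #
    # But the same code will just as readily fetch an internal-only
    # address the chatbot's operator never intended to expose:
    # fetch_data_connection("http://10.0.0.5/internal-admin")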
Tenable discovered the first vulnerability while testing the data connections feature.
"While testing these data connections to see if endpoints internal  to the service could be interacted with, Tenable researchers discovered that  many common endpoints, such as Azure's Internal Metadata Service (IMDS), were  appropriately filtered or inaccessible," Sebree said. "Upon closer  inspection, however, it was discovered that issuing redirect responses (e.g.  301/302 status codes) allowed these mitigations to be bypassed."
The researchers were ultimately able to access "hundreds and hundreds of resources belonging to other customers," he indicated.
Tenable promptly shared the vulnerability with the Microsoft Security Response Center (MSRC), which has since issued patches. "As it turns out, the fix for this issue was to simply reject redirect status codes altogether for data connection endpoints, which eliminated this attack vector," Sebree said.
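Expressed against the same hypothetical client as above, that mitigation amounts to refusing to follow redirects at all and treating any 3xx answer as an error -- a sketch of the approach, not Microsoft's actual patch:

    import requests

    def fetch_data_connection(url: str) -> requests.Response:
        """Fetches a data-connection endpoint but refuses to follow
        redirects, closing the redirect-based SSRF bypass."""
        response = requests.get(url, timeout=5, allow_redirects=False)
        # Reject any 3xx answer instead of chasing its Location
        # header to a potentially internal address.
        if 300 <= response.status_code < 400:
            raise ValueError(
                f"redirects are not permitted for data connections "
                f"(got HTTP {response.status_code})"
            )
        response.raise_for_status()
        return response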
The other vulnerability involved the validation mechanism that Azure AI Health Bot uses to establish connections with FHIR (Fast Healthcare Interoperability Resources) endpoints. Sebree described this one as less critical.
"The FHIR endpoint vector did not have the ability to influence  request headers, which limits the ability to access IMDS directly," Sebree  said. "While other service internals are accessible via this vector,  Microsoft has stated that this particular vulnerability had no cross-tenant  access."
Microsoft also released fixes for this vulnerability. Neither flaw is known to have been exploited in the wild.
Though both vulnerabilities involve an AI chatbot, Sebree emphasized that the fault lies in the architecture of Microsoft's service, rather than in the AI models powering it.