The last two days have been busy ones at Redmond. Yesterday, Microsoft announced its new Azure OpenAI Service for government. Today, the tech giant unveiled a new set of three commitments to its customers as they seek to integrate generative AI into their organizations safely, responsibly and securely.
Each represents a move forward in Microsoft’s journey toward mainstreaming AI and assuring its business customers that its AI solutions and approach are trustworthy.
Generative AI for government agencies of all levels
Those working in government agencies and civil services at the local, state and federal levels are often beset by more data than they know what to do with — data on constituents, contractors and initiatives, for example.
Generative AI, then, would seem to pose a tremendous opportunity: giving government workers the capability to sift through their vast quantities of data more rapidly, using natural language queries and commands, as opposed to clunkier, older methods of data retrieval and information lookup.
However, government agencies typically have very strict requirements concerning the technology they can apply to their data and tasks. Enter Microsoft Azure Government, which already works with the U.S. Defense Department, Energy Department and NASA, as Bloomberg noted when it broke the news of the new Azure OpenAI Service for government.
“For government customers, Microsoft has developed a new architecture that enables government agencies to securely access the large language models in the commercial environment from Azure Government allowing those users to maintain the stringent security requirements necessary for government cloud operations,” wrote Bill Chappell, Microsoft’s chief technology officer of strategic missions and technologies, in a blog post announcing the new tools.
Specifically, the company unveiled Azure OpenAI Service REST APIs, which allow government customers to build new applications or connect existing ones to OpenAI’s GPT-4, GPT-3, and Embeddings — but not over the public internet. Rather, Microsoft enables government clients to connect to OpenAI’s APIs securely over its encrypted, transport-layer security (TLS) “Azure Backbone.”
“This traffic stays entirely within the Microsoft global network backbone and never enters the public internet,” the blog post specifies, later stating: “Your data is never used to train the OpenAI model (your data is your data).”
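To make the mechanics concrete, here is a minimal sketch of how a government customer might assemble a call to the Azure OpenAI Service REST API. The resource name, deployment name, and API version below are hypothetical placeholders, and the `.openai.azure.us` hostname is an assumption based on Azure Government's sovereign-cloud domain; the routing over Microsoft's private backbone is handled by the Azure Government network, not by anything in the client code.

```python
import json

# Hypothetical values -- replace with your own Azure Government resource details.
RESOURCE = "contoso-agency"   # assumption: your Azure OpenAI resource name
DEPLOYMENT = "gpt-4"          # assumption: the name you gave your model deployment
API_VERSION = "2023-05-15"    # assumption: an API version available at the time


def build_completion_request(prompt: str, api_key: str):
    """Assemble (but do not send) a chat-completion request.

    From an Azure Government-hosted client, traffic to the *.azure.us
    endpoint is routed over Microsoft's network backbone rather than
    the public internet, per the blog post quoted above.
    """
    url = (
        f"https://{RESOURCE}.openai.azure.us/openai/deployments/"
        f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
    )
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    body = json.dumps(
        {"messages": [{"role": "user", "content": prompt}]}
    ).encode("utf-8")
    return url, headers, body


url, headers, body = build_completion_request("Summarize this permit backlog.", "KEY")
```

The request is sent with ordinary HTTPS tooling; the point is that the endpoint lives inside the government cloud boundary rather than at the public `openai.com` API.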
New commitments to customers
On Thursday, Microsoft unveiled three commitments to all of its customers concerning how the company will approach its development of generative AI products and services:
- Sharing its learnings about developing and deploying AI responsibly
- Creating an AI assurance program
- Supporting customers as they implement their own AI systems responsibly
As part of the first commitment, Microsoft said it will publish a number of key documents, including a Responsible AI Standard, an AI Impact Assessment Template, an AI Impact Assessment Guide, Transparency Notes, and detailed primers on responsible AI implementation. Additionally, Microsoft will share the curriculum used to train its own employees on responsible AI practices.
The second commitment focuses on the creation of an AI Assurance Program. This program will help customers ensure that the AI applications they deploy on Microsoft’s platforms comply with legal and regulatory requirements for responsible AI. It will include elements such as regulator engagement support, implementation of the AI Risk Management Framework published by the U.S. National Institute of Standards and Technology (NIST), customer councils for feedback, and regulatory advocacy.
Lastly, Microsoft will provide support for customers as they implement their own AI systems responsibly. The company plans to establish a dedicated team of AI legal and regulatory experts in different regions of the world to assist businesses in implementing responsible AI governance systems. Microsoft will also collaborate with partners, such as PwC and EY, to leverage their expertise and support customers in deploying their own responsible AI systems.
The broader context swirling around Microsoft and AI
While these commitments mark the beginning of Microsoft’s efforts to promote responsible AI use, the company acknowledges that ongoing adaptation and improvement will be necessary as technology and regulatory landscapes evolve.
The move by Microsoft comes in response to concerns surrounding the potential misuse of AI and the need for responsible AI practices. Those concerns include recent letters from U.S. lawmakers questioning Meta Platforms’ founder and CEO Mark Zuckerberg over the company’s release of its LLaMA LLM — scrutiny that experts say could have a chilling effect on the development of open-source AI.
The news also comes on the heels of Microsoft’s annual Build conference for software developers, where the company unveiled Fabric, its new data analytics platform for cloud users that seeks to put Microsoft ahead of Google’s and Amazon’s cloud analytics offerings.