“AI algorithms may be flawed,” the company acknowledged. “Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”
There’s also the use of AI by the military. In October, an undisclosed number of Microsoft employees wrote an open letter to the company expressing concern about a $10 billion contract to develop cloud services for the Department of Defense. In the post, the employees asked about the “violent application” of AI technology and the level of transparency the company would provide to those building it.
“How will workers, who build and maintain these services in the first place, know whether our work is being used to aid profiling, surveillance, or killing?” the post said.
These are all important questions for Microsoft after the company added AI to its strategic vision in 2017, formally making it a top priority.
At the weekly AI 365 meetings, Nadella and Scott are joined by Chief Financial Officer Amy Hood and other top executives. Scott said the meetings are necessary so that business leaders are on the same page and have a clear sense of where projects may be overlapping. They’re also useful for allowing a team that’s seeing strong results from a particular approach to explain it so that the model can be replicated across other projects.
“You look at something like machine learning where, especially on the frontier, there’s a small number of people who really have that frontier-pushing expertise and drive, and you really, really don’t want to waste their effort,” Scott said.
WATCH: Microsoft Research intends to make AI accessible and inclusive