It’s remarkable how much technology has changed the way we live and work over the last decade or two. Digital technologies, powered by the cloud, have made us smarter and more productive, transforming how we communicate, learn, shop, and play. And this is just the beginning. Advances in artificial intelligence (AI) are giving rise to computing systems that can see, hear, learn, and reason, creating new opportunities to improve education and healthcare, address poverty, and achieve a more sustainable future.
But these rapid technological changes also raise complex questions about their impact on other aspects of society: jobs, privacy, safety, inclusiveness, and fairness. When AI augments human decision-making, how can we ensure that it treats everyone fairly and is safe and reliable? How do we respect privacy? How can we ensure people remain accountable for systems that are becoming more intelligent and powerful?
To realize the full benefits of AI, we’ll need to work together to find answers to these questions and create systems that people trust. Ultimately, for AI to be trustworthy, we believe that it must be “human-centered” – designed in a way that augments human ingenuity and capabilities – and that its development and deployment must be guided by ethical principles that are deeply rooted in timeless values. At Microsoft, we believe that six principles should provide the foundation for the development and deployment of AI-powered solutions that put humans at the center.
I will be speaking about building trust in the Android world at Droidcon.