Responsible AI

 


Responsible AI is an approach to developing and deploying artificial intelligence (AI) from both an ethical and a legal perspective. The goal of responsible AI is to employ AI in a safe, trustworthy, and ethical way. Using AI responsibly should increase transparency and help reduce issues such as AI bias.

Today, we often talk about “responsible” AI use, but what do we really mean by it?

Generally speaking, being responsible means being aware of the consequences of our actions and making sure they don’t cause harm or put anyone in danger.

Microsoft outlines six key principles for responsible AI: accountability, inclusiveness, reliability and safety, fairness, transparency, and privacy and security.
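To make the fairness principle a little more concrete, here is a minimal sketch (hypothetical data and function name, assuming NumPy) that computes the demographic parity difference, one common proxy for measuring the kind of AI bias responsible AI aims to reduce. A large gap in positive-prediction rates between groups is one signal that a model may be treating people unfairly.

```python
import numpy as np

def demographic_parity_difference(predictions, sensitive_attribute):
    """Return the gap in positive-prediction rates between groups.

    A value near 0 means the model predicts positive outcomes at
    similar rates for all groups; larger values suggest possible bias.
    """
    groups = np.unique(sensitive_attribute)
    rates = [predictions[sensitive_attribute == g].mean() for g in groups]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = approved) and group labels (0 / 1).
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_difference(preds, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

This is only one of many fairness metrics, and which metric is appropriate depends on the context in which the AI system is used.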






