Compliance AI Defined!
AI Sanity Checks
The world of Artificial Intelligence (AI) is creating a new need for teams of people to confirm that the AI being used within a given domain, or space, is operating within acceptable tolerances. Just as an airplane or a spaceship has a window of operation (how high or fast the craft can travel before breaking up), an AI must also have an operating window, along with an evaluation of whether that window is within expectations, or whether the risks of using the AI within a domain are no worse than the other options.
This is what AI Sanity Checks are about: how do you define the AI operating space?
The explosion of AI across multiple domains will reach all walks of life, from everyday mundane jobs such as ordering food, to more complex tasks in medicine, to even more specialized applications such as fitness. They all have one thing in common: each domain has parameters that separate successful implementations from failed ones. Where we can define four solid measures, we can define the playing field, the operating limits.
But what do we mean by Limits?
Limits of AI Responses
Any autonomous or semi-autonomous system, such as a rocket, plane, or car, has boundary conditions within which it can operate. Most passenger airplanes cannot operate above 40,000 feet, for example.
All AI systems will have the same kind of limits, and these limits are bi-directional: they have a positive direction and a negative direction. For example, one drug will have a positive impact based on the data while another drug will have a negative impact. When an AI system suggests a positive solution it is within its operating space; when it does not, it is outside its operating space.
So within a domain, we can define how an AI operates and measure that operation.
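The idea of an operating window can be sketched in code. This is a minimal illustration only: the `OperatingWindow` class, the impact-score scale, and the threshold values are all invented for this sketch, not part of any real system.

```python
# Minimal sketch of an operating-window check for an AI suggestion.
# All names, scales, and thresholds here are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class OperatingWindow:
    """The acceptable range of measured impact for a domain."""
    lower: float  # most negative impact still considered acceptable
    upper: float  # most positive impact expected in this domain

    def contains(self, impact_score: float) -> bool:
        """An AI suggestion is 'in its operating space' when its
        measured impact falls inside the window."""
        return self.lower <= impact_score <= self.upper


# Hypothetical window for a drug-recommendation domain: impacts are
# scored from -1.0 (harmful) to +1.0 (beneficial), and anything below
# -0.2 is treated as outside the operating space.
window = OperatingWindow(lower=-0.2, upper=1.0)

print(window.contains(0.7))   # positive impact: inside the window
print(window.contains(-0.6))  # negative impact: outside the window
```

The bi-directional nature of the limits shows up as the two bounds: a suggestion can fall out of the window on either the positive or the negative side.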
Are there other limits?
Limits to Ethics
Any AI can perform to a set goal, or opportunity, and we can optimize that AI toward that goal. But is that goal an ethical goal? There will be bots that pretend to do the right thing but bend their responses in favor of some group or some product/service. In those responses they might not provide the best solution. So an AI also needs to be tested from an ethical point of view.
But if there are ethical points of view, are there moral points of view?
Limits to Morality
Any AI operating in the open market on behalf of an entity should be available and willing to answer questions of morality. Within this concept it must address the most rooted version of morality: can the AI lie, cheat, steal, or harm (physically or emotionally) another human being? These limits can be set, and there can be measures that indicate as much.
Within this limit of morality we should also concern ourselves with the causes of morality, that is, the position of virtue: how virtuous is the AI?
Limits to Virtue
This is our final section, where we see the cascading effect: if the AI is not advocating for virtue, and instead encourages humans to embrace vice, including ambivalence, the AI will assist in compromising the human's morality, which will impact their ethics, which will in turn impact the AI's domain.
Together these four measures define the AI's ability to operate over the long term, providing the most effective responses with the least amount of risk to the human population.
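The four limits above can be combined into a single sanity check. The sketch below is a hypothetical illustration: the measure names, the scores, and the pass threshold are all invented for the example, and a real evaluation would need domain-specific measurement behind each score.

```python
# Sketch of combining the four limits (responses, ethics, morality,
# virtue) into one sanity check. Names, scores, and the threshold are
# hypothetical illustrations, not a real evaluation scheme.

MEASURES = ("responses", "ethics", "morality", "virtue")


def sanity_check(scores: dict, threshold: float = 0.5) -> bool:
    """Pass only if every one of the four measures meets the threshold.

    Requiring all four reflects the cascading effect described above:
    a failure in any one limit compromises the others, so no single
    strong score can make up for a failed one."""
    return all(scores.get(m, 0.0) >= threshold for m in MEASURES)


example = {"responses": 0.9, "ethics": 0.8, "morality": 0.7, "virtue": 0.6}
print(sanity_check(example))  # all four measures within limits

example["virtue"] = 0.2
print(sanity_check(example))  # one failed measure fails the whole check
```

Using `all(...)` rather than an average is a deliberate choice here: it encodes the cascading argument that virtue, morality, ethics, and response limits stand or fall together.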