AI Ethical Principles in Practice

Chapter 6.  AI Principles in Practice

Editor’s Note: This post is part of a series on the Ethics of Artificial Intelligence.

So, assuming a company agrees that it wants to implement a technology that leverages AI, and decides it wants to do it ethically, for good, what’s next?  How do we as practitioners of AI engage and do more?  Let’s break it down into stages for putting it into practice.  Imagine you are charting a course to implementing an AI solution in your business (recognizing that some of these principles apply beyond that scenario).  You’d like to implement a demand planning solution, a customer experience optimizer, or a medical data analyzer, and you realize that in addition to planning the solution itself, you’d like to understand its ethical impact.  Here are the steps:

  1. Mission & Impact.  What is the mission of your AI initiative in impact-centric terms?  The mission should follow a hypothesis-driven approach that articulates an outcome, and that outcome needs real, measurable objectives.  Also evaluate the basic outcome of the mission… is it ethical?  Is it the right thing to do?

  2. People Impact and Cx.  Capture and articulate the impact on the customers, partners, and employees who interact with or are part of your firm.  How does the AI initiative change the lives of your stakeholders, and how does it drive the outcome?  How does the impact on people become an end rather than just a means?  The human stakeholders of your system need to be planned for as carefully as the technical and business outputs.

  3. Team for the Mission.  Understand the team necessary to strategize, plan, and execute on your mission in the most complete way.  What roles are necessary for success, and who needs to perform them?  Put this in industry-specific, business-specific terms and map it to the who, what, why, and how.  Have you built a diverse set of talent from a variety of cultural backgrounds to inform the system’s usage?

  4. Data Architecture.  What is the overall architecture of the system with respect to tooling, data storage, and processing?  How does that architecture maintain the privacy and integrity of data at all stages?  The architecture needs to be documented, implemented in a maintainable way, and built to mitigate use-case-specific data loss and security vulnerabilities (2).  Implementing ethics in a data architecture means being accountable for the data entrusted to you, whether against loss, theft, or misuse.
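As a hedged illustration of that accountability, here is a minimal Python sketch of pseudonymizing direct identifiers before records reach shared analytics storage.  The field names, salt handling, and hash choice are assumptions for the sketch, not a prescribed design.

```python
import hashlib

# Placeholder salt; in practice manage it via a secret store and rotate it.
SALT = b"rotate-me-per-environment"

def pseudonymize(value: str) -> str:
    """One-way hash so records can still be joined without exposing raw PII."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()

def prepare_for_storage(record: dict) -> dict:
    """Hash the (assumed) identifier fields before the record is persisted."""
    safe = dict(record)
    for field in ("email", "customer_id"):  # hypothetical PII fields
        if field in safe:
            safe[field] = pseudonymize(str(safe[field]))
    return safe

print(prepare_for_storage({"email": "a@example.com", "order_total": 42.0}))
```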

  5. Model Definition.  The “work” of data science is essentially about proving a hypothesis through a model.  Plan, define, and iterate on the model to solve the problem.  Drive toward a proof of the hypothesis and an initial ability to engage in pilot workstreams.  Along the way, leverage previous learnings and academic research, especially where they provide insight into model gaps, ethical missteps caused by inputs, or usage guidance.
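One way to frame “proving the hypothesis” in code is to require the candidate model to beat a naive baseline on held-out data before a pilot is justified.  This sketch uses scikit-learn with a synthetic dataset; the lift threshold is an assumption to be set per mission.

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the mission's real dataset.
X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The hypothesis is only "proven" if the model meaningfully beats the baseline.
lift = model.score(X_te, y_te) - baseline.score(X_te, y_te)
print(f"lift over baseline: {lift:.3f}")
assert lift > 0.05, "hypothesis not supported; do not proceed to pilot"
```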

  6. Examine the Inputs.  What data points are used to accomplish the mission?  Understand what the data inputs are, where they come from, and where there is potential for error, bias, dependency on error-ridden data, or tampering.  Understand the relationship between the incoming data, what it might do to the model, and the mission it will impact.
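A minimal input audit along these lines might quantify missingness and representation before training.  The column names and tolerances below are hypothetical; the point is that the checks run before the data ever shapes the model.

```python
import pandas as pd

# Tiny illustrative input set; "income" and "region" are assumed columns.
df = pd.DataFrame({
    "income": [52_000, None, 61_000, 48_000, 75_000, None],
    "region": ["north", "north", "south", "south", "north", "south"],
})

missing_rate = df["income"].isna().mean()
group_share = df["region"].value_counts(normalize=True)
print(f"missing income: {missing_rate:.0%}")
print(group_share)

# Flag inputs that would silently skew the model downstream.
if missing_rate > 0.10:
    print("WARN: income missingness exceeds tolerance; investigate the source")
if group_share.min() < 0.20:
    print("WARN: a region is under-represented relative to the population")
```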

  7. Define the Metrics that = Healthy.  What happens if certain data inputs or outputs become unhealthy, and what determines health?  How do we measure those metrics to ensure we understand what the system is “thinking” and how it is handling the input/output?  Think of this not just in terms of business impact but of human impact.  How do we know if the human impact is inappropriate?  For instance, if you were constructing an AI system that drove Cx usage of a mobile app… how would you know if usage is out of bounds and indicates misuse?  Define the metrics for the guard rails to be built on.
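Staying with the mobile-app example, a health metric needs an explicit definition of “healthy” and an alert threshold.  This sketch treats daily sessions per user outside an expected band as a misuse signal; the band and threshold are assumptions to set per use case.

```python
# Assumed definition of "healthy": 1-20 sessions per user per day.
EXPECTED_SESSIONS = (1, 20)
MISUSE_ALERT_RATE = 0.02  # alert if more than 2% of users fall outside the band

def out_of_bounds_rate(daily_sessions: list[int]) -> float:
    """Share of users whose usage falls outside the expected band."""
    lo, hi = EXPECTED_SESSIONS
    outliers = [s for s in daily_sessions if s < lo or s > hi]
    return len(outliers) / len(daily_sessions)

rate = out_of_bounds_rate([3, 5, 2, 240, 4, 1, 7, 9])  # 240 looks like abuse
print(f"out-of-bounds rate: {rate:.1%}")
if rate > MISUSE_ALERT_RATE:
    print("ALERT: usage pattern suggests possible misuse; involve a human")
```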

  8. Understand the limitations.  Understand what the model does NOT do.  This realization needs to be captured and understood so the ethical impact of those limitations can be assessed.  For instance, does the model behind a bot appropriately route emergency situations, where an “escape” vector to a human intercept needs to be defined?  Does the model understand certain intangibles it needs to be told about, or is it simply understood that it can’t handle them, with that limitation communicated clearly to the internal or external customer?
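A hedged sketch of that “escape” vector: if the bot detects an emergency or is not confident, it hands off to a human rather than answering.  The keyword list and confidence floor are illustrative stand-ins for real intent detection.

```python
EMERGENCY_TERMS = {"chest pain", "suicide", "overdose", "can't breathe"}
CONFIDENCE_FLOOR = 0.75  # assumed minimum confidence for the bot to answer

def route(message: str, intent: str, confidence: float) -> str:
    """Route to a human intercept when the model is out of its depth."""
    text = message.lower()
    if any(term in text for term in EMERGENCY_TERMS):
        return "HUMAN_INTERCEPT: emergency escalation"
    if confidence < CONFIDENCE_FLOOR:
        return "HUMAN_INTERCEPT: below the model's competence threshold"
    return f"BOT_HANDLES: {intent}"

print(route("I have chest pain and dizziness", "symptom_triage", 0.92))
print(route("What are your hours?", "store_hours", 0.40))
```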

  9. Define the guard rails.  Leverage the information captured in the metrics and limitations to define guard rails for the model’s successful operation and interaction with people.  For example, if a loan qualification AI starts making race-based loan decisions based on patterns learned from biased data, the system should be built to enact guard rails that capture, triage, and take corrective action, potentially with human intervention (particularly in a critical situation), to address the outcomes.  Design the system FOR the gaps that can exist and for dealing with ambiguity, rather than assuming the outputs are good and will always be pulled into the end system.  In some cases you need to build a model that doesn’t apply its results because it ran into a guard rail.
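For the loan example, a guard rail might compare approval rates across groups and hold the batch for human review when the gap exceeds a tolerance.  The tolerance and the group field are assumptions; a real system would use a vetted fairness metric.

```python
PARITY_TOLERANCE = 0.10  # assumed maximum allowed approval-rate gap

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """Approval rate per group from a batch of model decisions."""
    by_group: dict[str, list[int]] = {}
    for d in decisions:
        by_group.setdefault(d["group"], []).append(1 if d["approved"] else 0)
    return {g: sum(v) / len(v) for g, v in by_group.items()}

def apply_guard_rail(decisions: list[dict]) -> str:
    """Withhold the batch, rather than apply results, when parity breaks."""
    rates = approval_rates(decisions)
    if max(rates.values()) - min(rates.values()) > PARITY_TOLERANCE:
        return f"HOLD for human review: approval rates {rates}"
    return "RELEASE: batch within parity tolerance"

batch = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": True},
]
print(apply_guard_rail(batch))
```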

  10. Understand the outputs.  The benefit of AI systems is that they can be plugged into a variety of downstream activities and systems.  The downside is the same: the team needs to ensure the outputs are understandable and paired with their limitations based on incoming data, bias management, and guard rails.  How does a downstream application know how to interact with your data output and leverage it correctly?  This needs to be modeled against opportunities and checks for fairness and ethics that would not be caught by the earlier guard rails but might be more far-reaching.
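One way to make outputs understandable downstream is to ship every prediction inside an envelope that carries its confidence, version, and known limitations.  The field names below are a sketch, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class ModelOutput:
    """Prediction plus the context a downstream consumer needs to use it."""
    value: float
    confidence: float
    model_version: str
    caveats: list[str] = field(default_factory=list)

out = ModelOutput(
    value=0.82,
    confidence=0.64,
    model_version="demand-forecast-1.3.0",  # hypothetical model name
    caveats=["trained on historical data only",
             "unreliable for new store openings"],
)

# The consumer decides based on the envelope, not the bare number.
usable = out.confidence >= 0.5 and not out.caveats
print(out, "-> use directly" if usable else "-> apply downstream checks")
```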

  11. Provide validation to outputs.  Build a way for outputs to be validated against a set of criteria to ensure they exist within a set of bounds, potentially in the context of the linked application.  The model might be built to provide data X and might be correct as far as the model is concerned, but the application may be using that data in a way the model designer didn’t expect, so further guard rails may be built into the application itself.  These aren’t just application health protections; they might be protections tied to the “intent” of the system.  For example, the model may have “learned” to interact with customers using slang, but further downstream the system might catch this type of interaction based on rules for all interactions.  Further, you may follow the guidelines for human-AI interaction and build visibility into the customer interaction to ensure a human knows it is interacting with a machine, not another human.  Finally, you need to build MLOps into your approach, with the ability to monitor the health of the AI model and know whether it is working.
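A minimal sketch of that application-side validation, assuming a demand forecast feeding an ordering system: even a “correct” model output is re-checked against bounds tied to the system’s intent.  The bounds are illustrative.

```python
import math

def validate_forecast(units: float, shelf_capacity: float) -> float:
    """Sanity-check the model output before the application acts on it."""
    if math.isnan(units) or units < 0:
        raise ValueError("forecast failed sanity check; fall back to rules")
    # The model may be right that demand is huge, but the application's
    # intent caps orders at physical capacity.
    return min(units, shelf_capacity)

print(validate_forecast(180.0, shelf_capacity=120.0))  # clamped to 120.0
```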

  12. Plan for training of impacted workers and opportunities.  In some cases the AI system may change the characteristics of jobs, or even eliminate jobs that previously performed certain functions.  The flip side is that AI stands to increase the opportunity and earning potential of those individuals if they can rise with the AI system and perform complementary functions.  A business should identify early in the lifecycle how it can point impacted individuals toward self-improvement so they can be part of the long-term picture.  If it does, those individuals will easily pay for themselves in model improvement, advisory work, and defining the next mission, leading to value down the road.  We know that businesses are better when we help individuals become the best versions of themselves… this is an opportunity to do that.

  13. Product, not Project Thinking.  The system you’ve built is not perfect.  This is the opportunity to identify ways to mitigate the ethical or functional concerns the model is missing.  It is tremendously unlikely that you will identify all of these gaps initially, so your mentality needs to shift from project to product thinking.  Plan for the system to miss, have gaps, and need adjustments, and plan to address them through agile sprints that keep making the system better.  In the same way, implement guard rails early to test and iterate on features, rather than taking “big bang” release approaches that risk surfacing functional or ethical issues on day 1.  Instead, build it like a product you intend to stick with, and want to get right.

  14. Continuous Improvement & Feedback.  In a product mindset you build the mechanism for feedback into the interface and the system.  Provide a way for customers or internal stakeholders to self-report ethical and functional issues.  The availability of a feedback mechanism that evenly and attentively addresses feedback will mitigate many problems later in the implementation.  In this sense, plan for failure and understand how you will react to it.  Plan for the system to encounter ethical problems… how will you engage, resolve, and communicate?
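Here is a minimal feedback record, sketched under the assumption that ethical reports are first-class, routable work items rather than free text lost in a queue.  The categories and routing rules are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FeedbackItem:
    reporter: str
    category: str        # assumed values: "ethical" | "functional" | "other"
    description: str
    received_at: datetime

def triage(item: FeedbackItem) -> str:
    """Route ethical reports on a faster path than ordinary product feedback."""
    if item.category == "ethical":
        return "route to review board within 24 hours"
    return "route to product backlog"

item = FeedbackItem("customer-123", "ethical",
                    "bot suggested different rates by zip code",
                    datetime.now(timezone.utc))
print(triage(item))
```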

  15. Product Transparency.  The release of the system to customers or internal stakeholders should come with clear product transparency.  In cases of direct human-machine interaction, that means identifying that the system is in fact a machine.  In cases where the AI is engaged in optimization or improvement, it means clarity with human partners on how the tool works and how they can leverage it to engage in a better future.  The intent, use, and existence of the product’s AI features should be clear, and in some cases terms should be provided to the user.

  16. Inclusive Design Qualities.  Building an inclusive AI system means building a system for everybody: the way it communicates, the language it uses, its accessibility to various types of input, and its ability to leverage many points of view.  Ethics in this case is about maximizing who can use the system and ensuring people are not “left out” when they can capably contribute or participate in the innovation.

  17. Maintain Human Expertise.  Even after creating a system with excellent results, it is important to maintain relevant expertise that understands how the system functions and how to work with it.  Don’t “set it and forget it,” hoping continuous improvement, understanding, or control won’t be necessary.

  18. Define Human Review Boards.  Maintain a list of AI-impacted systems within the enterprise, and have a model for maintaining ethical management of their use and their impact on the community.  An AI model’s impact can grow and change beyond its original purpose, and a group of people who care about the impact on the business and the related community is critical to implementing an ethical AI environment.
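A sketch of the registry such a board might maintain: one entry per AI-impacted system, with an owner and a review date, worked oldest-review-first.  The fields and entries are illustrative, not a compliance standard.

```python
from dataclasses import dataclass

@dataclass
class RegisteredSystem:
    name: str
    owner: str
    purpose: str
    last_ethics_review: str  # ISO date

# Hypothetical entries for illustration only.
registry = [
    RegisteredSystem("demand-planner", "supply chain", "forecast SKUs", "2021-01-15"),
    RegisteredSystem("loan-qualifier", "lending", "pre-screen applicants", "2020-06-01"),
]

# The board reviews the systems with the stalest ethics review first.
for system in sorted(registry, key=lambda s: s.last_ethics_review):
    print(system.name, "last reviewed", system.last_ethics_review)
```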

The practical steps listed above exist at various points in the AI project lifecycle, but if followed they will not only increase the ethical nature of the system that is built, they will increase its effectiveness.  The practices that make an AI system ethical are many of the same practices that make it effective, where controls are leveraged to improve and enhance.  Remember… this is important.  We can make a difference if this is part of every project.

Nathan Lasnoski
