
Exploring AI and Ethics


Welcome to Exploring AI and Ethics. Here you will learn what AI ethics is and why it matters. You will find out what makes AI ethics a socio-technical challenge, what it means to build and use AI ethically, and how organizations can put AI ethics into action. AI, artificial intelligence, is pervasive in everybody's life. Even if we often don't realize it, we use it when we pay with a credit card online, when we search for something on the web, when we post, like, or follow somebody on a social platform, and even when we drive with the navigation support and driver-assistance capabilities of a car based on AI. This pervasiveness is rapidly generating a significant transformation in our lives and in the structure and equilibrium of our society. This is why AI, besides being a technical and scientific discipline, also has a very significant social impact, and it raises many ethical questions about how AI should be designed, developed, deployed, used, and regulated.

The socio-technical dimension of AI requires an effort to identify all stakeholders, who go well beyond technical experts and include sociologists, philosophers, economists, policymakers, and all the communities impacted by the deployment of this technology. Inclusiveness is necessary in defining the ecosystem, in all the phases of AI development and deployment, and in assessing the impact of AI in the deployment scenario. Without it, we risk creating AI only for some and leaving many others behind in a disadvantaged position.



Everybody needs to be involved in defining the vision of the future that we want to build, using AI and other technology as a means and not as an end. To achieve this, appropriate guidelines are necessary to drive the creation and use of AI in the right direction. Technical tools are necessary and useful, but they should be complemented by principles, guardrails, well-defined processes, and effective governance. We should not think that all of this slows down innovation. Think about traffic rules: it may seem that traffic lights, right-of-way rules, stop signs, and speed limits slow us down. Without them, however, we would not drive faster; we would actually drive much slower, because we would always be in a state of complete uncertainty about other vehicles and pedestrians. AI ethics identifies and addresses the socio-technical issues raised by this technology and makes sure that the right kind of innovation is supported and facilitated, so that the path to the future we want is faster.


As IBM's CEO states, "Trust is our license to operate." We have earned this trust through our policies, programs, partnerships, and advocacy for the responsible use of technology. For over 100 years, IBM has been at the forefront of innovation that brings benefits to our clients and to society. This approach most definitely applies to the development, use, and deployment of AI. Therefore, ethics should be embedded into the lifecycle of the design and development process. Ethical decision-making is not just a technical problem-solving exercise; rather, an ethical, sociological, technical, and human-centered approach should be taken, based on principles, values, standards, laws, and benefits to society. Having this foundation is important, even necessary, but where do you start? A good place to start is with a set of guiding principles. At IBM we call ours the Principles of Trust and Transparency, and there are three of them: the purpose of AI is to augment, not replace, human intelligence; data and insights belong to their creator; and new technology, including AI systems, must be transparent and explainable. This last principle is built upon our pillars, of which there are five. We just mentioned transparency, which reinforces trust by sharing what the AI is being used for and how. AI must also be explainable, and it must be fair.


When it is properly calibrated, AI can assist humans in making fairer choices. AI should also be robust, which means it should be secure, and it should be privacy-preserving, safeguarding people's privacy and rights. We know that having principles and pillars is not enough. We have an extensive set of tools and talented practitioners who can help diagnose, monitor, and promote all of our pillars, with continuous monitoring to mitigate drift and unintended consequences.

The first step to putting AI ethics into action, just like with anything else, is building understanding and awareness. This is about equipping your teams to think about AI ethics and what it means to put it into action in whatever solution you are building and deploying. Let's take an example: if you are building a learning solution and deploying it within a company, the HR team leader who is doing that should be thinking about whether the solution is designed with users in mind. Have you co-created the solution with users? How does it enable equal access to opportunity for all employees across diverse groups? A keen understanding of AI ethics, and reflecting on these issues continuously, is critical as a foundation for putting AI ethics into action. The second step, once you have built that understanding and awareness and everybody is reflecting on this topic, is to put a governance structure in place. The critical point here is that it is a governance structure to scale AI ethics in action: it is not about doing it in one isolated instance in a market or a business unit, it is about a governance structure that works at scale. We talked about understanding and awareness as the foundation; second is governance, where leaders are responsible for putting structures in place. Once you have these two elements, the third step is operationalizing.
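To make the idea of continuous monitoring a little more concrete, here is a minimal sketch in Python. It is an illustration only, not an IBM tool or a prescribed process: it assumes you have kept a reference sample of model scores from training time, and it uses a two-sample Kolmogorov-Smirnov test to flag when a recent batch of production scores looks noticeably different, which is one simple signal of drift.

```python
# Minimal drift-check sketch: compare current model scores against a
# reference sample kept from training time. The names, data, and the
# 0.05 threshold are illustrative assumptions, not a prescribed process.
import numpy as np
from scipy.stats import ks_2samp


def drift_alert(reference_scores, current_scores, alpha=0.05):
    """Return (drifted, statistic, p_value) for a two-sample KS test."""
    result = ks_2samp(reference_scores, current_scores)
    return result.pvalue < alpha, result.statistic, result.pvalue


# Example: reference scores from validation at training time,
# current scores from this week's production predictions (simulated here).
rng = np.random.default_rng(seed=0)
reference = rng.normal(loc=0.40, scale=0.10, size=5000)
current = rng.normal(loc=0.55, scale=0.10, size=5000)

drifted, stat, p = drift_alert(reference, current)
print(f"drift detected: {drifted} (KS statistic={stat:.3f}, p={p:.3g})")
```

In practice, a team might run a check like this on a schedule and route any alert into the governance structure described above, where humans decide whether to retrain, recalibrate, or pause the model.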


How do you make sure that a developer, a data scientist, or a vendor in Malaysia or Poland knows how to put AI ethics into action? What does it mean for them? It is one thing to put structures in place at the global level, but how do you make sure it is operationalized at scale in the markets, so that every user, every data scientist, and every developer knows what they need to do? This is all about having clarity on the pillars of trustworthy AI. For IBM, the first is transparency. Let's go back to our learning example: are you designing it with users? Think about what we consider best-in-class transparent recommendation systems, such as your favorite movie streaming service or your cab-hailing service. It is transparent. Is it explainable? Is it telling you what the recommendations are and why they are being made, and also telling you that, as the user, the final decision is yours to make? Fairness: is it giving equal access to opportunity to everyone by ensuring adoption, not just of the process but also of the outcome, across different groups? Robustness and privacy: every data scientist, every developer, and every vendor needs to know what we mean by each of these in a very operational manner.
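As one illustration of what "operational" can look like for the fairness pillar, here is a minimal sketch assuming a simple log of decisions with a group label and a binary outcome column. The column names, the toy data, and the 0.8 threshold are all illustrative assumptions, not IBM's method. It compares selection rates across groups and computes a disparate-impact ratio, one common way to quantify whether outcomes differ across groups.

```python
# Minimal fairness-check sketch: compare positive-outcome rates across
# groups. Column names ("group", "selected"), the toy data, and the 0.8
# rule-of-thumb threshold are illustrative assumptions only.
import pandas as pd


def selection_rates(df, group_col="group", outcome_col="selected"):
    """Fraction of positive outcomes per group."""
    return df.groupby(group_col)[outcome_col].mean()


def disparate_impact_ratio(rates):
    """Lowest group rate divided by highest; 1.0 means equal outcomes."""
    return rates.min() / rates.max()


# Example: outcomes from a hypothetical learning-opportunity recommender.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "selected": [1,   1,   0,   1,   0,   0,   0,   1],
})

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # commonly cited rule of thumb; choose your own threshold
    print("Flag for review: outcomes differ noticeably across groups.")
```

A developer or vendor with a concrete check like this knows exactly what to measure and when to escalate, which is the kind of operational clarity the pillars are meant to provide.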



Avinash C. Pillai

Technology Director

syniverse® 

The world’s most connected company™ 

Website / Twitter / LinkedIn

