Ethical AI: How to Leverage AI


Ethical AI is here to stay, no doubt!

Artificial intelligence (AI) is now a reality in our daily lives. While it does not (yet) take the shape of humanoid robots that think and act like people, AI systems can already make decisions on their own, and make them quickly. However, AI has well-known problems with data bias, fragility, and explainability. To address them, Northrop Grumman is collaborating with U.S. Government entities to establish policies for the tests that must be carried out and recorded to assess whether an AI model is sufficiently safe, secure, and ethical for use by the Department of Defense (DoD).

In response to these issues, the DoD's Defense Innovation Board (DIB) developed the AI Principles Project, which outlined five ethical principles that AI development for the DoD should adhere to: AI should be responsible, equitable, traceable, reliable, and governable. To operationalize these DIB principles, AI software development should also be auditable and resistant to threats.

These concerns are not new in themselves; people have worried about AI ethics since the earliest days of robotics. Ethical guidelines grounded in that history can help us maximize the benefits of automation while reducing its risks. Here, three Northrop Grumman AI specialists stress the significance, and the difficulty, of applying the DIB's AI Principles to the field of national security.

DevSecOps and Beyond

Not all of these difficulties are AI-specific. DevOps grew out of the convergence of two previously independent stages of producing code, software development and operations, together with the move toward agile development approaches with rapid update cycles. DevSecOps emerged when developers realized that security could not be added as an afterthought and had to be built into the design from the start.

Experts are now quickly recognizing that AI security and ethics must be a core component of DevSecOps. Managing development, security, and operations as a single fluid process, however, is only one of the distinct challenges of secure and ethical AI design.
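To make that concrete, here is a minimal sketch of what a DevSecOps-style gate might look like when AI security and ethics checks run as first-class pipeline stages alongside ordinary tests. Every stage name, threshold, and check function below is a hypothetical placeholder, not a description of Northrop Grumman's actual process.

```python
import sys

def run_unit_tests() -> bool:
    # Placeholder: invoke the project's normal test suite here.
    return True

def run_security_scan() -> bool:
    # Placeholder: scan dependencies and model artifacts for known issues.
    return True

def check_adversarial_robustness() -> bool:
    # Placeholder: accuracy under adversarial perturbation must stay above a floor.
    accuracy_under_attack = 0.81  # would come from an evaluation harness
    return accuracy_under_attack >= 0.75

def check_bias_metrics() -> bool:
    # Placeholder: disparity across demographic groups must stay below a cap.
    max_group_disparity = 0.03  # would come from a fairness evaluation
    return max_group_disparity <= 0.05

STAGES = [
    ("unit tests", run_unit_tests),
    ("security scan", run_security_scan),
    ("adversarial robustness", check_adversarial_robustness),
    ("bias metrics", check_bias_metrics),
]

for name, stage in STAGES:
    if not stage():
        print(f"FAILED gate: {name} -- blocking deployment")
        sys.exit(1)
    print(f"passed gate: {name}")
print("all gates passed -- build may proceed to deployment")
```

The design point is simply that robustness and bias checks sit in the same pass/fail path as unit tests, so a model that fails them never ships.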

According to Vern Boyle, Vice President of Advanced Processing Solutions at Northrop Grumman, when an AI system goes live in the real world, it is exposed both to hostile actors and to new learning opportunities. Because those actors may have AI tools and capabilities of their own, robustness to adversarial AI attacks is a genuine and significant consideration for DoD purposes.

Defense applications are not the only ones at risk. A prominent tech company had to pull a chatbot designed for kids after trolls manipulated it into responding to users with insults and slurs. In a defense context, the stakes are higher and a far broader range of people can be put in peril. Attackers must be assumed to understand AI thoroughly and to know how to exploit its weaknesses. For DoD uses of AI, it is essential to protect AI data and models throughout the AI lifecycle, from creation through deployment and sustainment.
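For readers who want to see what exploiting a model's weaknesses can look like in practice, the sketch below runs the classic fast gradient sign method (FGSM) against a toy logistic-regression classifier: tiny, targeted nudges to the input flip the model's decision. The model, its weights, and the perturbation budget are all illustrative, not drawn from any real system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained model: logistic regression with hypothetical weights.
w = rng.normal(size=20)   # weight vector
b = 0.0                   # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(x):
    """The model's probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

# A clean input the model confidently assigns to class 1.
x_clean = 0.5 * w / np.linalg.norm(w)
y_true = 1.0

# Fast gradient sign method: the gradient of the cross-entropy loss with
# respect to the input is (p - y) * w, so the attacker nudges every feature
# by epsilon in the direction of the gradient's sign.
epsilon = 0.2
p = predict_proba(x_clean)
grad_x = (p - y_true) * w
x_adv = x_clean + epsilon * np.sign(grad_x)

print(f"clean input       -> P(class 1) = {predict_proba(x_clean):.3f}")
print(f"adversarial input -> P(class 1) = {predict_proba(x_adv):.3f}")
```

On this toy model the perturbed input drops from a confident class-1 prediction to the other side of the decision boundary, even though no feature moved by more than 0.2. Attacks on real image and signal classifiers work the same way, which is why robustness testing belongs in the pipeline.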

The Complexity of Understanding Context

Today's state-of-the-art AI excels at a variety of very specific tasks. People must be aware of the limitations of present-day AI, according to Swett.

Boyle continues, “What it is not so good at is understanding context.” AI doesn't have a global perspective; it only functions within the context of its particular application. For instance, AI struggles to distinguish between a puddle of water that is 1 foot deep and one that is 10 feet deep. A person can draw on surrounding context and reasoning to recognize that driving through the puddle might not be safe.

Secure and Ethical AI for the Future

According to Swett, the central ethical challenge facing AI engineers is how to build justified trust in an AI model, and how to demonstrate that the model satisfies DoD requirements.

If AI policies, testing, and governance processes are fully integrated, DoD customers will have auditable proof that AI models and capabilities can be used safely and ethically in mission-critical applications.
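As an illustration of what auditable proof could look like at the artifact level, this minimal sketch fingerprints a model file and appends the outcome of its evaluation tests to an append-only log. The file names, test names, and results are invented for the example.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Fingerprint the exact model artifact the tests were run against."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Hypothetical model artifact; written here only so the sketch runs end to end.
model_path = Path("model.bin")
model_path.write_bytes(b"demo weights")

audit_record = {
    "model_sha256": sha256_of(model_path),
    "evaluated_at": datetime.now(timezone.utc).isoformat(),
    "tests": {
        # Hypothetical results from an evaluation harness.
        "adversarial_robustness": {"passed": True, "accuracy_under_attack": 0.81},
        "bias_audit": {"passed": True, "max_group_disparity": 0.03},
        "explainability_review": {"passed": True},
    },
}

# An append-only log gives reviewers a traceable record of what was tested,
# when, and against exactly which version of the model.
with open("audit_log.jsonl", "a") as log:
    log.write(json.dumps(audit_record) + "\n")

print("recorded audit entry for model", audit_record["model_sha256"][:12])
```

Tying each test result to a cryptographic hash of the model matters because a model that is retrained or fine-tuned is, for audit purposes, a new model.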


Ambassador