Google's TCAV will Protect AI and ML Models from Bias


Google CEO Sundar Pichai said that the company is working hard to make its Artificial Intelligence and Machine Learning models more transparent as a way to protect against bias.

Sundar Pichai highlighted a bevy of Artificial Intelligence enhancements and stepped-up efforts to run more Machine Learning models directly on devices. But the key point for developers and data scientists may be something called TCAV, which stands for Testing with Concept Activation Vectors. In essence, TCAV is an interpretability method for understanding which signals your neural network models use for their predictions.
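
To give a rough sense of the idea, here is a minimal sketch (not Google's implementation) of how a Concept Activation Vector and a TCAV score can be computed. The activations and gradients below are synthetic stand-ins; in practice they would come from one layer of a trained network and from the gradient of a target class score with respect to that layer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for illustration only.
rng = np.random.default_rng(0)
concept_acts = rng.normal(loc=1.0, size=(100, 64))   # layer activations for "concept" examples
random_acts = rng.normal(loc=0.0, size=(100, 64))    # layer activations for random examples
class_grads = rng.normal(loc=0.5, size=(200, 64))    # d(target-class score)/d(activations)

# 1. Learn the Concept Activation Vector (CAV): the normal to a linear
#    boundary separating concept activations from random activations.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 2. Conceptual sensitivity: directional derivative of the class score
#    along the CAV, for each input of the target class.
sensitivities = class_grads @ cav

# 3. TCAV score: fraction of inputs whose prediction is positively
#    sensitive to the concept direction.
tcav_score = float(np.mean(sensitivities > 0))
print(f"TCAV score for the concept: {tcav_score:.2f}")
```

A score well above what random, meaningless concepts achieve suggests the concept genuinely pushes the model toward that class; a score close to the random baseline suggests the concept has little influence.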

In practice, TCAV's ability to surface these signals could expose bias, because it can highlight whether the model treats "male" as a stronger signal than "female", and it can surface similar issues around race, income, and location. Using TCAV, data scientists can measure how much a given high-level concept actually contributes to a model's predictions.
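
Continuing the synthetic sketch above, a hypothetical bias check might compare the scores of two demographic concepts for the same target class. The concept names, activations, and the 0.2 threshold here are all made up for illustration.

```python
# Wrap the CAV + score steps in a helper (reuses np, LogisticRegression,
# rng, random_acts and class_grads from the sketch above).
def tcav_score(concept_acts, random_acts, class_grads):
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    cav /= np.linalg.norm(cav)
    return float(np.mean(class_grads @ cav > 0))

male_acts = rng.normal(loc=0.8, size=(100, 64))    # stand-in "male" concept activations
female_acts = rng.normal(loc=0.1, size=(100, 64))  # stand-in "female" concept activations

gap = abs(tcav_score(male_acts, random_acts, class_grads)
          - tcav_score(female_acts, random_acts, class_grads))
if gap > 0.2:  # illustrative threshold, not a standard
    print(f"Concept-sensitivity gap of {gap:.2f}: worth auditing for gender bias.")
```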

For Google, transparency matters because of technologies like Duplex and the next-generation Google Assistant. These tools are meant to work faster for you and save you a lot of time, and more transparent models can mean more trust in, and more use of, Google's technology.

TCAV analyzes a model and describes why it makes a particular decision. For example, a model that recognizes a zebra may be relying on high-level concepts such as stripes.

Sundar Pichai said that Google's AI team is working on TCAV, a technology that explains a model's predictions in terms of high-level concepts. The goal of TCAV is to portray the variables that underpin a model. He added, "There is more to do, but we are committed to building Artificial Intelligence in a way that works for everyone."

For more of the latest and trending news on artificial intelligence, stay tuned with us at LunaticAI.

Related:

1. Google has Launched an AI Virtual Assistant
2. PoemPortrait: Google's New AI Project
3. Why Google AI Failed in Math Test
4. Google has Launched a New AI Image Analyzer Tool
5. Google Launches AI Platform
