
Generative AI: Ethics, Accessibility, Legal Risk Mitigation

see more slides, notes, and other material here: https://github.com/Aggregate-Intellect/practical-llms/

Speaker: https://www.linkedin.com/in/noelleai/

** Use of generative AI to improve accessibility and the lives of people with disabilities
... see the link above for more notes
** Managing and moderating LLMs to reduce bias, and increase fairness
6/20: Deliberate choices that companies make can reduce the impact of bias and mitigate risk. Addressing potential issues with LLMs requires a comprehensive approach: diverse datasets, fine-tuned foundation models, adequate hardware resources, content management, and continuous monitoring.
7/20: LLMs are trained on datasets generated by humans, so awareness of the potential sources of bias and inaccuracy in that data is crucial. Diverse datasets are necessary when training LLMs to reduce the impact of those imperfections.
8/20: LLMs make assumptions based on their training data, which can lead to incorrect conclusions. Noelle mentioned a case where an LLM accepted a falsely attributed scholarly article as genuine because it extrapolated from patterns in its training data.
9/20: Inclusive data collection is needed to ensure that AI solutions represent their end users. Inclusive collection, combined with tracking data lineage and context, can help mitigate biases present in training data.
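The lineage-and-context idea in note 9/20 can be sketched as a provenance record attached to each training example, so coverage gaps become visible before training. This is a minimal illustration; the field names and sources are assumptions, not something from the talk:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingExample:
    """One training example with provenance metadata for bias audits."""
    text: str
    source: str        # where the data came from, e.g. "support-tickets"
    collected_at: str  # ISO date of collection (lineage)

def coverage_by_source(examples):
    """Count examples per source so under-represented sources stand out."""
    counts = {}
    for ex in examples:
        counts[ex.source] = counts.get(ex.source, 0) + 1
    return counts

data = [
    TrainingExample("How do I reset my password?", "support-tickets", "2023-01-10"),
    TrainingExample("Great product!", "reviews", "2023-02-01"),
    TrainingExample("Refund request", "support-tickets", "2023-02-12"),
]
print(coverage_by_source(data))  # {'support-tickets': 2, 'reviews': 1}
```

A real pipeline would carry richer context (consent, licensing, demographic coverage), but even this minimal record makes skew auditable.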
10/20: LLMs can amplify bias over time, but there are ways to mitigate the effect. A human-in-the-loop process to monitor, manage, and control the LLMs is crucial. #AmplificationOfBias #HumanInLoop
11/20: LLMs can speed up content generation, but left unmanaged at scale they can slow the process down by producing inappropriate or poor-quality content. Moderation and management of LLM-generated content are crucial. #ContentGeneration #Moderation
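Notes 10/20 and 11/20 together describe a human-in-the-loop gate: generated content is published automatically only when it passes checks, otherwise it is queued for a human moderator. A minimal sketch, where the blocklist and confidence threshold are illustrative assumptions (a production system would use a trained classifier, not word matching):

```python
BLOCKLIST = {"badword"}  # placeholder; stands in for a real content classifier

def route(output: str, confidence: float, threshold: float = 0.8) -> str:
    """Decide whether one generated output is published or sent to a human."""
    flagged = any(word in BLOCKLIST for word in output.lower().split())
    if flagged or confidence < threshold:
        return "human_review"
    return "publish"

print(route("Thanks for contacting support!", 0.95))  # publish
print(route("uncertain draft answer", 0.42))          # human_review
```

The key design point is that the automated path only handles the easy cases; anything flagged or low-confidence falls through to the human team rather than being silently published.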
12/20: Specific performance metrics for LLMs are essential for telling a good model from a bad one. Continuous monitoring by a human team is necessary to ensure the model keeps working correctly. #PerformanceMetrics #ContinuousMonitoring
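The continuous-monitoring point can be sketched as a rolling quality metric over recently reviewed outputs, with an alert threshold that pages the human team. The window size and threshold here are illustrative values, not from the talk:

```python
from collections import deque

class QualityMonitor:
    """Track a rolling pass-rate over recent outputs and flag degradation."""
    def __init__(self, window: int = 100, alert_below: float = 0.9):
        self.recent = deque(maxlen=window)  # sliding window of pass/fail labels
        self.alert_below = alert_below

    def record(self, passed: bool) -> bool:
        """Record one reviewed output; return True if the rate is alarming."""
        self.recent.append(passed)
        rate = sum(self.recent) / len(self.recent)
        return rate < self.alert_below

mon = QualityMonitor(window=10, alert_below=0.8)
alerts = [mon.record(ok) for ok in [True] * 7 + [False] * 3]
print(alerts[-1])  # True: pass-rate dropped to 0.7, below the 0.8 threshold
```

In practice the pass/fail signal would come from human review samples or automated checks, and an alert would trigger investigation rather than just a boolean.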
** Versatility and scalability of potential LLM use cases
13/20: LLMs are built on foundation models that can be fine-tuned to fit specific business needs. This allows businesses to create LLMs tailored to their specific needs and goals, making them increasingly popular with companies willing to invest in them.
14/20: The demand for LLMs is growing rapidly, and ML developers need to be able to build and deploy them quickly to meet the demand. This puts a premium on rapid development and deployment processes.
15/20: LLMs can do more than hold conversations. They can generate natural-language requests and responses from various sources, including customer signals, website data, and ticketing systems, and can even formulate questions automatically and generate Power BI dashboards.
16/20: Next generations of LLMs may be multi-modal, meaning they can combine different types of input, such as images and text, to generate output. This makes them valuable across a variety of contexts and use cases.
17/20: LLMs require significant hardware resources to operate, and only a few labs in the world can support the required hardware. A balance must be struck between building internally, using offerings from mainstream vendors, and using those from emerging providers.
** Challenges and Risks of LLMs
18/20: Organizations have a responsibility to guard against potential legal issues when using LLMs. Noelle emphasized thinking those issues through up front and approaching projects with an awareness of the risks involved.
19/20: Evaluating risks and mitigating them in the solution is important. Some LLM use cases, such as customer call centers and customer-service ticketing, are relatively easy to approach and de-risk. More rigor and discipline are required for projects using Codex, which was trained on GitHub repos.
20/20: Indemnification matters when using LLMs, to protect against ownership challenges that may arise in the future. Enterprise-level solutions provide more indemnification than research models like DALL-E, especially if the model is not custom-trained on your own art.

Video "Generative AI: Ethics, Accessibility, Legal Risk Mitigation" from the channel LLMs Explained - Aggregate Intellect - AI.SCIENCE
Video information: published March 22, 2023, 16:44:53; duration 00:45:15