HR!Day723 - AI Alignment Summit Day 2: In search of an ethical and compassionate AI - Edi Pyrek, Mila Orlinska & Georg Boch

From othernetworks.org

Day 723 Tue 6/20/23 AI Alignment Summit Day 2: In search of an ethical and compassionate AI


Humanity Rising Day 723 - Tuesday, June 20, 2023

This Week: AI Alignment

Day 1: Unpacking
Day 2: Ethical
Day 3: Education
Day 4: Social Media
Day 5: Guardrails

[Images: Tom Eddington, Georg Boch, and the HR presenters for Day 723]

AI alignment is a term used to describe how closely the "processing" and behavior of an AI system match human interests. High alignment points to a positive AI future; low alignment leads to a dystopian one.

Many experts agree that, in the near term, the fate of our civilization hinges on AI alignment.

The immense difficulty of the problem is at the core of what moved prominent figures in AI to demand a six-month pause on AI development.

Even Ilya Sutskever, Chief Scientist at OpenAI, stated in a recent interview:

“I would not underestimate the difficulty of alignment of models that are actually smarter than us, of models that are capable of misrepresenting their intentions. It's something to think about a lot and do research.

Oftentimes academic researchers ask me what’s the best place where they can contribute. And alignment research is one place where academic researchers can make very meaningful contributions.

A mathematical definition (of alignment) is unlikely. Rather than achieving one mathematical definition, I think we will achieve multiple definitions that look at alignment from different aspects. And that is how we will get the assurance that we want."

The goal of our interdisciplinary series is to help explore these multiple definitions.

We explore the computer science term AI alignment via novel perspectives from evolutionary biology, sociology, philosophy as well as insights from the wisdom tradition.

DAY TWO

AI Alignment: In search of an ethical and compassionate AI

How can we approach the idea of alignment for complex systems like LLMs?

Panelists:

Edi Pyrek, Founder, Global Artificial Intelligence Association

A writer, journalist, speaker, university lecturer, and advisor to three Prime Ministers, Edi Pyrek has worked in Afghanistan as a peace negotiator and as a business mentor to leading corporations and institutions. He has made films for Discovery and written books for National Geographic. As part of his work on global problems, he began developing an "ontological spiral" to understand teachings on the morality of Artificial Intelligence.

Mila Orlinska is the CTO and founder of the iMind Institute. She specializes in neuromodulation, mindfulness, breathing techniques, and insight methods for working with emotions. In her courses and trainings she draws on many years of experience in meditation and self-development, as well as her work as an IT specialist.

Moderators:

Georg Boch

Tom Eddington

46 Participants

---

To make a voluntary contribution to support the partner organizations and the Humanity Rising team, please see our contribution form.

Each Zoom live webinar will have a maximum capacity of 500 participants. If you are not able to join on Zoom, we will be live streaming here on the UbiVerse and on:

UU YouTube: https://www.youtube.com/c/UbiquityUniversity
