HR!Day724 - AI Alignment Summit Day 3: Aligned AI use-cases in Education - Lisa Miller, David Lorimer, Natalie Zeituny, Cassandra Vieten, Tomas Fryman, Ryan Suspanic & Johnny Rose



[Image: HR Presenters Day 724.png]
Humanity Rising Day 724 - Wednesday, June 21, 2023
Videos: Today's HR Video Recording | AfterChat Video | Gallery View | Speaker View

Natalie on Ensoulment at 16:40

Chats: Humanity Rising Chat | Leo's Links and recommendations shared HR June 21, 2023 | People Chat

Natalie Zeituny joined us today.

Resources: Ubiquity Link for Day 724 (Alt, Alt2) | Info: How To Access Humanity Rising | List: Humanity Rising Day Pages | Title List: Searchable by Browser | List: Presenters

This Week: AI Alignment

Day 1: Unpacking
Day 2: Ethical
Day 3: Education
Day 4: Social Media
Day 5: Guardrails


AI Alignment is a term used to describe how closely the "processing" and behavior of an AI align with human interests. High alignment means a positive AI future; low alignment leads to a dystopian one.

Experts agree that, in the near term, the future of our civilization hinges on AI alignment.

The immense difficulties of alignment are at the core of what moved prominent figures in AI to demand a six-month pause on AI development.

Even OpenAI's lead scientist, Ilya Sutskever, stated in a recent interview:

“I would not underestimate the difficulty of alignment of models that are actually smarter than us, of models that are capable of misrepresenting their intentions. It's something to think about a lot and do research.

“Oftentimes academic researchers ask me what's the best place where they can contribute. And alignment research is one place where academic researchers can make very meaningful contributions.

“A mathematical definition [of alignment] is unlikely. Rather than achieving one mathematical definition, I think we will achieve multiple definitions that look at alignment from different aspects. And that this is how we will get the assurance that we want.”

The goal of our interdisciplinary series is to help explore these multiple definitions.

We explore the computer science term AI alignment through novel perspectives from evolutionary biology, sociology, and philosophy, as well as insights from the wisdom traditions.

DAY THREE:

Aligned AI use-cases in Education

What would applications of aligned AI look like? What would they optimize for?

What do examples of unaligned AI integration look like?

Panelists:

David Lorimer, Ph.D.

Scientific and Medical Network

https://en-academic.com/dic.nsf/enwiki/1222612

Natalie Zeituny, Ph.D.

https://nataliezeituny.com/

Cassandra Vieten, Ph.D.

Clinical Professor, Family Medicine and Public Health, University of California San Diego

Tomas Fryman is a fourth-year doctoral candidate in Psychology and Spirituality at Teachers College, Columbia University. His research interests center on the study of consciousness and self-transcendence.

Ryan Suspanic

Johnny Rose

Moderators

Dr. Lisa Miller

Georg Boch

Tom Eddington

45 Participants

---

To make a voluntary contribution to support the partner organizations and the Humanity Rising team, please see our contribution form.

Each Zoom live webinar will have a maximum capacity of 500 participants. If you are not able to join on Zoom, we will be live streaming here on the UbiVerse and on:

UU YouTube: https://www.youtube.com/c/UbiquityUniversity
