
The Alignment Problem: Machine Learning and Human Values (original: 2020; edition: 2020)

by Brian Christian (Author)

Members: 205 · Reviews: 3 · Popularity: 131,975 · Average rating: 4.26 · Mentions: 2
"A jaw-dropping exploration of everything that goes wrong when we build AI systems-and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us-and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole-and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel"--
Member: biocentric_666
Title: The Alignment Problem: Machine Learning and Human Values
Authors: Brian Christian (Author)
Information: W. W. Norton & Company (2020), Edition: 1, 496 pages
Collections: Wishlist
Rating:
Tags: None

Work Information

The Alignment Problem: Machine Learning and Human Values by Brian Christian (2020)



» See also 2 mentions

Showing 3 of 3
There was a lump in my throat when DeepMind's AlphaGo crushed Lee Sedol at Go, the oldest (3,000-year-old) and arguably most complex strategic board game, because with that, AI not only defeated the greatest player ever but effectively severed any future association between Go and humans. No human will ever beat AI at Go again, period; that fortress is breached! We have essentially been relegated to a mere factoid in the timeline of this planet.

While capitalism will ensure that humans are inevitably pushed "out of the loop" in every aspect, the question is not if but when. Brian Christian's The Alignment Problem educates the reader about the real pitfalls of depending on algorithms and the inherent drawbacks of machine learning. In my opinion, Christian dwells much more deeply on the alignment problem at hand than Nick Bostrom's Superintelligence does; Bostrom set the stage for AI safety and was labelled an alarmist; well, not anymore.

From dopamine-exploiting social media algorithms to parole sentences to mortgage application approvals, these highly pervasive machine learning algorithms now control various aspects of human life, while Congress grapples with legislation and red tape.

The book gives an overarching view of how ML algorithms came about, organized around "pillars" such as curiosity, imitation, reinforcement, model bias, and bad data samples, and of why it is crucial to align AI goals with human values.

And, as is often the case, the problems are more philosophical in nature than anything else, which also highlights the importance of psychology, social anthropology, neurophysiology, and psychoanalysis playing a quintessential part in the future development of this nascent field. The latter part of the book deals with possibly the tougher questions that AI poses; happy to see the Effective Altruism movement founder Will MacAskill get a page in there too.
  Vik.Ram | Aug 12, 2022 |
An impressive, conversation-based analysis of how AI systems developed through processes of machine learning (ML) might be constrained to be both safe and ethical. I had little idea of how rich and massive the research on this has been. In nine chapters with carefully chosen one-word headings (Representation, Fairness, Transparency, Reinforcement, Shaping, Curiosity, Imitation, Inference, and Uncertainty), the author describes a sequence of diverse and increasingly sophisticated ML concepts, culminating in what is called Cooperative Inverse Reinforcement Learning (CIRL). Whether AI will ever stop being part of what I regard as the wrongness of modern technology, I don't know, but at least there are people in the field who have their hearts in the right place.
  fpagan | Mar 21, 2022 |
There is a great book trapped inside this good book, waiting for a skillful editor to carve it out. The author did vast research in multiple domains, and it seems he could neither build a cohesive narrative connecting all of it nor leave anything out.

This book is probably the best intro to machine learning space for a non-engineer I've read. It presents its history, challenges, what can be done, and what can't be done (yet). It's both accessible and substantive, presenting complex ideas in a digestible form without dumbing them down. If you want to spark the ML interest in anyone who hasn't been paying attention to this field, give them this book. It provides a wide background connecting ML to neuroscience, cognitive science, psychology, ethics, and behavioral economics that will blow their mind.

It's also very detailed, screaming at the reader "I did the research, I went where no one else dared to go!". It will not only present you with an intriguing ML concept but also: trace its roots to a nineteenth-century farming problem or a biology breakthrough, present all the scientists contributing to the research, explain how they met and got along, cite the author's interviews with some of them, and describe their lives after they published their masterpiece, including completely unrelated information about their substance abuse and the dark circumstances of their premature deaths. It's written quite well, so there might be an audience who enjoys this, but sadly I'm not part of it.

If this book were structured to address the subject of the alignment problem directly, it would be at least three times shorter. That doesn't mean the other two-thirds are bad: most of it is informative, some of it is entertaining, and a lot of it seems like ML material the author found interesting and simply added to the book without any specific connection to its premise. I really liked the first few chapters, where machine learning algorithms are presented as the first viable benchmark for the human thinking process and the mental models we build. Spoiler alert: it very clearly shows our flaws, biases, and the lies we tell ourselves (which are further embedded in the ML models we create and the technology that uses them).

Overall, I enjoyed most of this book. I just feel a bit cheated by its title and premise, which advertise a different kind of book. This is the Machine Learning omnibus, presenting the most interesting scientific concepts of this field and the scientists behind them. If this is what you expect and need, you won't be disappointed!
  sperzdechly | Mar 18, 2021 |
The Alignment Problem does an outstanding job of explaining insights and progress from recent technical AI/ML literature for a general audience. For risk analysts, it provides both a fascinating exploration of foundational issues about how data analysis and algorithms can best be used to serve human needs and goals and also a perceptive examination of how they can fail to do so.
added by Edward | Risk Analysis, Louis Anthony Cox Jr. (paid site) (Mar 3, 2023)
 



Rating

Average: 4.26
Distribution: 3★ × 2 · 3.5★ × 2 · 4★ × 11 · 4.5★ × 1 · 5★ × 9
