
蓋茨對(duì)話奧爾特曼:未來(lái)兩年多模態(tài)、可定制化和個(gè)性化將非常重要

2024-01-16 10:45:07發(fā)布     來(lái)源:比爾蓋茨    作者:比爾·蓋茨  

  來(lái)源:比爾蓋茨官方公眾號(hào)

  編者按:在蓋茨的播客中,蓋茨問(wèn)及了大模型的發(fā)展及應(yīng)用。奧爾特曼認(rèn)為,未來(lái)兩年,多模態(tài)、可定制化和個(gè)性化將非常重要。談到AI對(duì)生產(chǎn)力的提升時(shí),奧爾特曼提到,編程是目前最令人興奮、應(yīng)用也最廣泛的領(lǐng)域。此外,醫(yī)療保健和教育是另外兩個(gè)有望快速發(fā)展的領(lǐng)域。

  如果讓人們列舉人工智能領(lǐng)域的領(lǐng)軍人物,有一個(gè)名字你可能會(huì)聽(tīng)得最多:薩姆·奧爾特曼(Sam Altman)。他在OpenAI的團(tuán)隊(duì)正在用ChatGPT挑戰(zhàn)人工智能的極限,我很高興能和他談?wù)勏乱徊降挠?jì)劃。我們的談話涵蓋了為什么今天的人工智能模型是最愚蠢的,社會(huì)將如何適應(yīng)技術(shù)變革,甚至當(dāng)我們完善了人工智能之后,人類將在哪里找到目標(biāo)。

  比爾·蓋茨:我今天的嘉賓是薩姆·奧爾特曼。當(dāng)然,他是OpenAI的首席執(zhí)行官。長(zhǎng)期以來(lái),他一直是科技行業(yè)的創(chuàng)業(yè)者和領(lǐng)導(dǎo)者,包括經(jīng)營(yíng)Y Combinator,這家公司做了很多了不起的事情,比如資助Reddit、Dropbox、Airbnb。

  在我錄制本期節(jié)目后不久,他被解除了OpenAI首席執(zhí)行官的職務(wù)(至少是短暫地),這完全出乎我的意料。解雇后的幾天里發(fā)生了很多事情,包括幾乎所有OpenAI員工聯(lián)名支持薩姆回歸,而現(xiàn)在,薩姆又回來(lái)了。所以,在你聽(tīng)到我們的對(duì)話之前,讓我們先來(lái)了解一下薩姆,看看他現(xiàn)在過(guò)得怎么樣。

  比爾·蓋茨:嘿,薩姆。

  薩姆·奧爾特曼:嘿,比爾。

  比爾·蓋茨:你好嗎?

  薩姆·奧爾特曼:哦,天哪。這真的太瘋狂了,我還好。這是一個(gè)非常激動(dòng)人心的時(shí)期。

  比爾·蓋茨:團(tuán)隊(duì)情況怎么樣?

  薩姆·奧爾特曼:我想,你知道很多人都注意到了這樣一個(gè)事實(shí),那就是團(tuán)隊(duì)從未如此高效、樂(lè)觀、出色。所以,我猜這也正是藏在所有事情背后的一線希望。

  在某種意義上,這是我們成長(zhǎng)的真正時(shí)刻,我們非常有動(dòng)力變得更好,變成一個(gè)為我們所面臨的挑戰(zhàn)做好準(zhǔn)備的公司。

  比爾·蓋茨:太棒了。

  所以,我們?cè)趯?duì)話中不會(huì)討論那件事;然而,你會(huì)聽(tīng)到薩姆致力于建立一個(gè)安全和負(fù)責(zé)任的人工智能的承諾。我希望你喜歡這次對(duì)話。

  歡迎來(lái)到《為自己解惑》。我是比爾·蓋茨。

  比爾·蓋茨:今天我們將主要關(guān)注人工智能,因?yàn)樗绱肆钊伺d奮,人們同時(shí)也對(duì)它感到擔(dān)憂。歡迎,薩姆。

  薩姆·奧爾特曼:非常感謝你邀請(qǐng)我來(lái)參加節(jié)目。

  比爾·蓋茨:我有幸見(jiàn)證了你們工作的進(jìn)展,但一開(kāi)始我是非常懷疑的,也沒(méi)期待過(guò)ChatGPT能做得這么好。它讓我十分驚訝,其實(shí)我們并不真正理解其中的編碼方式。我們知道這些數(shù)字,也能看著它做乘法運(yùn)算,但莎士比亞(的作品)究竟被編碼在哪里?你認(rèn)為我們能對(duì)這種表示方式有更深的理解嗎?

  薩姆·奧爾特曼:百分之百可以。要在人腦中做到這一點(diǎn)非常難。你可以說(shuō)這是一個(gè)類似的問(wèn)題:有這些神經(jīng)元,它們彼此相連,而連接在不斷變化。我們不可能切開(kāi)你的大腦來(lái)觀察它是如何演化的,但對(duì)這些模型,我們可以完美地“透視”。目前在可解釋性方面已經(jīng)有一些非常好的工作,而且我認(rèn)為隨著時(shí)間的推移還會(huì)有更多。我認(rèn)為我們終將能夠理解這些網(wǎng)絡(luò),但我們目前的理解還很淺。而正如你所預(yù)料的,我們僅了解的那一點(diǎn)點(diǎn),已經(jīng)對(duì)改進(jìn)這些東西非常有幫助。撇開(kāi)科學(xué)好奇心不談,我們都有動(dòng)力去真正理解它們,只是它們的規(guī)模實(shí)在太龐大了。我們同樣可以問(wèn):莎士比亞(的作品)被編碼在你大腦的哪個(gè)位置,又是如何表示的?

  比爾·蓋茨:我們不知道。

  薩姆·奧爾特曼:我們確實(shí)不知道,甚至可以說(shuō)在這些我們本應(yīng)能夠完美透視、觀察并進(jìn)行任何測(cè)試的大量數(shù)字中我們還是找不到答案,這就更讓人缺少滿足感。

  比爾·蓋茨:我非常確信,在接下來(lái)的五年內(nèi),我們會(huì)理解它。就訓(xùn)練效率和準(zhǔn)確性而言,這種理解將讓我們做得比今天能做的好得多。

  薩姆·奧爾特曼:百分之百同意。在技術(shù)發(fā)展史上,你會(huì)一再看到這樣的情形:有人先取得了經(jīng)驗(yàn)性的發(fā)現(xiàn),雖然不知道背后發(fā)生了什么,但它顯然行得通;然后,隨著科學(xué)理解的加深,人們能把它做得更好。

  比爾·蓋茨:是的,在物理學(xué)、生物學(xué)中,有時(shí)只是隨便一通亂試,然后就“哇”的一聲——這究竟是怎么實(shí)現(xiàn)的?

  薩姆·奧爾特曼:就我們的情況而言,構(gòu)建GPT-1的那個(gè)人差不多是獨(dú)自完成并解決了這個(gè)問(wèn)題,這相當(dāng)令人印象深刻,但當(dāng)時(shí)對(duì)它如何工作、為什么有效并沒(méi)有深入的理解。然后我們有了擴(kuò)展定律(scaling laws),可以預(yù)測(cè)它會(huì)變得多好。這就是為什么當(dāng)我們告訴你可以做一個(gè)演示時(shí),我們相當(dāng)有信心它會(huì)成功。當(dāng)時(shí)我們還沒(méi)有訓(xùn)練那個(gè)模型,但我們很有信心。這引導(dǎo)我們做了大量嘗試,對(duì)正在發(fā)生的事情有了越來(lái)越科學(xué)的認(rèn)識(shí)。但這確實(shí)是從經(jīng)驗(yàn)結(jié)果先行開(kāi)始的。

  比爾·蓋茨:當(dāng)你展望未來(lái)兩年,你認(rèn)為會(huì)有哪些重要的里程碑?

  薩姆·奧爾特曼:多模態(tài)肯定會(huì)很重要。

  比爾·蓋茨:你指的是語(yǔ)音輸入、語(yǔ)音輸出?

  薩姆·奧爾特曼:語(yǔ)音輸入、語(yǔ)音輸出,然后是圖像,最終是視頻。顯然,人們真的需要這些。我們已經(jīng)推出了圖像和音頻,反響比我們的預(yù)期要強(qiáng)烈得多。我們能夠?qū)⑵渫七M(jìn)得更遠(yuǎn),但也許最重要的進(jìn)步領(lǐng)域?qū)@推理能力展開(kāi)?,F(xiàn)在,GPT-4的推理能力還非常有限。還有可靠性,如果你問(wèn)GPT-4大部分問(wèn)題10000次,這10000次中可能有一次回答得很好,但它不一定知道是哪一次。而你卻希望每次都能得到這10000次中最好的回答,因此可靠性的提升將非常重要。
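
  上面提到的可靠性思路(從多次采樣中挑出最好的一次回答),可以用一個(gè)極簡(jiǎn)的 best-of-n 采樣草圖來(lái)示意。以下只是一個(gè)假設(shè)性的示意實(shí)現(xiàn),其中 generate 和 score 都是占位函數(shù),并非對(duì)話中提到的任何真實(shí)系統(tǒng):

```python
from typing import Callable

def best_of_n(generate: Callable[[str], str],
              score: Callable[[str, str], float],
              prompt: str,
              n: int = 16) -> str:
    """采樣 n 個(gè)候選回答,返回得分最高的一個(gè)。

    generate 和 score 都是占位符:實(shí)際中 generate 會(huì)調(diào)用語(yǔ)言模型,
    score 會(huì)調(diào)用某種獎(jiǎng)勵(lì)模型或驗(yàn)證器。"""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))
```

  當(dāng)然,正如對(duì)話中所說(shuō),真正的目標(biāo)是讓模型第一次就給出最好的回答,而不是靠事后從一萬(wàn)次采樣中去挑。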

  可定制性和個(gè)性化也將非常重要。人們對(duì)GPT-4的需求各不相同:不同的風(fēng)格,不同的假設(shè)集,我們將使所有這些成為可能,然后還能讓它使用你自己的數(shù)據(jù)。它能夠了解你、你的電子郵件、你的日歷、你喜歡的預(yù)約方式,并與其他外部數(shù)據(jù)源連接,所有這些都將是最重要的改進(jìn)領(lǐng)域。

  比爾·蓋茨:在目前的基礎(chǔ)算法中,它只是在做簡(jiǎn)單的前饋和乘法,所以為了生成每一個(gè)新詞,它本質(zhì)上都在做同樣的事情。我很感興趣的是,你們是否會(huì)走到這樣一步:就像求解一個(gè)復(fù)雜的數(shù)學(xué)方程那樣,可能需要應(yīng)用任意多次變換,那時(shí)用于推理的控制邏輯可能要比我們今天的做法復(fù)雜得多。

  薩姆·奧爾特曼:至少,我們似乎需要某種形式的自適應(yīng)計(jì)算?,F(xiàn)在,我們?cè)诿總€(gè)標(biāo)記上都花費(fèi)同樣多的計(jì)算資源,不管它是一個(gè)簡(jiǎn)單的標(biāo)記,還是解決一些復(fù)雜的數(shù)學(xué)問(wèn)題。

  比爾·蓋茨:是的,比如說(shuō),“解決黎曼假設(shè)……”

  薩姆·奧爾特曼:那需要大量的計(jì)算。

  比爾·蓋茨:但它用的計(jì)算資源跟說(shuō)個(gè)“The”一樣。

  薩姆·奧爾特曼:對(duì),我們至少得讓這一點(diǎn)行得通。在此之上,我們可能還需要復(fù)雜得多的東西。
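
  這段對(duì)話里“每個(gè)標(biāo)記花同樣多的計(jì)算”可以用一個(gè)粗略的估算草圖來(lái)說(shuō)明。下面的“2×參數(shù)量”只是業(yè)界常用的粗略經(jīng)驗(yàn)法則(前向傳播生成每個(gè)標(biāo)記約消耗 2N 次浮點(diǎn)運(yùn)算),“提前退出”式的自適應(yīng)方案也只是一個(gè)假設(shè)性示意,并非對(duì)話中任何具體系統(tǒng)的實(shí)現(xiàn):

```python
def flops_per_token(n_params: int) -> int:
    """粗略經(jīng)驗(yàn)法則:稠密 Transformer 前向傳播生成每個(gè)標(biāo)記
    大約消耗 2 * 參數(shù)量 的浮點(diǎn)運(yùn)算,無(wú)論這個(gè)標(biāo)記是 "The",
    還是一段證明里的關(guān)鍵一步。"""
    return 2 * n_params

def early_exit_flops(n_params: int, n_layers: int, exit_layer: int) -> float:
    """假設(shè)性的“提前退出”式自適應(yīng)計(jì)算:簡(jiǎn)單標(biāo)記只走前
    exit_layer 層就停止,計(jì)算量按層數(shù)比例縮減。"""
    assert 0 < exit_layer <= n_layers
    return 2 * n_params * exit_layer / n_layers
```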

  比爾·蓋茨:你和我都參加過(guò)一次參議院的教育會(huì)議,我很高興有大約30名參議員到場(chǎng),我們幫助他們快速跟上進(jìn)展,因?yàn)檫@是一個(gè)重大的變革推動(dòng)者。在爭(zhēng)取政界人士參與這件事上,我不認(rèn)為我們?cè)撜f(shuō)已經(jīng)做得過(guò)頭了。然而,當(dāng)他們說(shuō),“哦,我們?cè)谏缃幻襟w上搞砸了,我們應(yīng)該做得更好”——那仍是一個(gè)懸而未決的挑戰(zhàn),在兩極分化方面存在非常負(fù)面的因素。即使是現(xiàn)在,我也不確定我們?cè)撊绾螒?yīng)對(duì)。

  薩姆·奧爾特曼:我不明白為什么政府在社交媒體方面不能更有效,但這似乎值得作為一個(gè)研究案例去理解,因?yàn)樗麄儸F(xiàn)在將要面臨的是與AI相關(guān)的挑戰(zhàn)。

  比爾·蓋茨:這是一個(gè)很好的研究案例,那么當(dāng)你談?wù)摫O(jiān)管時(shí),你是否清楚該構(gòu)建哪種類型的監(jiān)管?

  薩姆·奧爾特曼:我認(rèn)為我們正開(kāi)始弄清楚。在這個(gè)領(lǐng)域施加過(guò)度監(jiān)管是非常容易的,你也可以看到過(guò)去發(fā)生過(guò)許多這樣的先例。但同樣,如果我們是對(duì)的(當(dāng)然結(jié)果也可能證明我們錯(cuò)了),如果這項(xiàng)技術(shù)發(fā)展到我們認(rèn)為它會(huì)達(dá)到的程度,它將影響社會(huì)、影響地緣政治力量的平衡,以及其他許多事物。對(duì)于這些目前仍是假設(shè)性的、但未來(lái)極其強(qiáng)大的系統(tǒng)——不是指GPT-4,而是指算力是其10萬(wàn)倍或100萬(wàn)倍的系統(tǒng),我們已經(jīng)逐漸接受了設(shè)立一個(gè)全球監(jiān)管機(jī)構(gòu)的想法,由它來(lái)緊盯這些超級(jí)強(qiáng)大的系統(tǒng),因?yàn)樗鼈兇_實(shí)會(huì)產(chǎn)生如此大的全球影響。我們談到的一種模式是類似國(guó)際原子能機(jī)構(gòu)(IAEA)的模式。對(duì)于核能,我們當(dāng)年做出了同樣的決定:由于其潛在的全球影響,它需要某種全球性的機(jī)構(gòu)。我認(rèn)為這是合理的。此外還會(huì)有很多短期問(wèn)題,比如這些模型可以說(shuō)什么、不可以說(shuō)什么?我們?nèi)绾慰创鏅?quán)問(wèn)題?不同的國(guó)家會(huì)有不同的考慮,這沒(méi)問(wèn)題。

  比爾·蓋茨:有些人認(rèn)為,如果一些模型非常強(qiáng)大,我們就會(huì)對(duì)它們感到害怕——全球核監(jiān)管之所以行之有效,基本上是因?yàn)橹辽僭诿裼梅矫?,每個(gè)人都希望共享安全實(shí)踐,而且這一點(diǎn)做得非常好。當(dāng)你涉及核武器方面時(shí),就沒(méi)有這種情況了。如果關(guān)鍵在于阻止整個(gè)世界做危險(xiǎn)的事情,你會(huì)希望有一個(gè)全球政府,但今天對(duì)于許多問(wèn)題,如氣候問(wèn)題、恐怖主義,可以看到我們很難合作。人們甚至援引中美競(jìng)爭(zhēng)來(lái)解釋為什么任何放緩的想法都是不恰當(dāng)?shù)?。難道任何放慢腳步的想法,或者說(shuō)放慢腳步到足夠謹(jǐn)慎的程度,都難以實(shí)施嗎?

  薩姆·奧爾特曼:是的,我認(rèn)為如果被理解為要求放慢速度,那將非常困難。但如果換一種說(shuō)法:“做你想做的事,但任何超過(guò)某個(gè)極高算力門檻的計(jì)算集群”——鑒于這里的成本,全世界可能只有五個(gè)左右這樣的集群——都必須接受類似國(guó)際武器核查員的審查,那里的模型必須接受安全審計(jì),在訓(xùn)練期間通過(guò)一些測(cè)試,并在部署前通過(guò)審計(jì)和測(cè)試。對(duì)我來(lái)說(shuō),這似乎是可行的。我之前不太確定,但今年我進(jìn)行了一次環(huán)球之旅,與許多需要參與這一計(jì)劃的國(guó)家的元首進(jìn)行了交談,他們幾乎都表示了支持。這不會(huì)讓我們免于所有問(wèn)題,規(guī)模較小的系統(tǒng)仍然會(huì)出問(wèn)題,有些情況下可能錯(cuò)得相當(dāng)嚴(yán)重,但我認(rèn)為這可以幫助我們應(yīng)對(duì)最高層面的風(fēng)險(xiǎn)。

  比爾·蓋茨:我確實(shí)認(rèn)為,在最好的情況下,人工智能可以幫助我們解決一些難題。

  薩姆·奧爾特曼:當(dāng)然可以。

  比爾·蓋茨:包括兩極分化的問(wèn)題,因?yàn)樗赡軙?huì)破壞民主,而那將是一個(gè)極其糟糕的事情?,F(xiàn)在,我們看到人工智能帶來(lái)了很多生產(chǎn)力的提升,這是非常好的事情。你最興奮的領(lǐng)域是哪些?

  薩姆·奧爾特曼:首先,我始終認(rèn)為值得記住的是,我們正處在這一長(zhǎng)期、連續(xù)的曲線上?,F(xiàn)在,我們有能夠完成任務(wù)的人工智能系統(tǒng)。它們當(dāng)然不能完成一個(gè)完整的工作(崗位所做的事情),但它們可以做些任務(wù),并且在那里有生產(chǎn)力的提升。最終,它們將能夠做更多類似今天人類工作的事情,我們?nèi)祟惍?dāng)然也會(huì)找到新的、更好的工作。我完全相信,如果你給人們更強(qiáng)大的工具,他們不僅僅可以工作得更快,還可以做一些本質(zhì)上不同的事情?,F(xiàn)在,我們或許可以將程序員的工作速度提高三倍。這就是我們所看到的,也是我們最興奮的領(lǐng)域之一,它運(yùn)行得非常好。但是,如果你能讓程序員的效率提高三倍,那就不僅僅是他們能做的事情多了三倍,而是他們能在更高的抽象層次上、使用更多的腦力去思考完全不同的事情。這就好比從打孔卡到更高級(jí)的語(yǔ)言,不僅僅是讓我們的編程速度快了一點(diǎn),而是讓我們得到了質(zhì)的提升。我們確實(shí)看到了這一點(diǎn)。

  當(dāng)我們看向能夠完成更完整任務(wù)的下一代人工智能時(shí),你可以把它想象成一個(gè)小代理,你可以對(duì)它說(shuō):“幫我把這整個(gè)程序?qū)懥?,過(guò)程中我會(huì)問(wèn)你幾個(gè)問(wèn)題”,而它不再只是一次寫(xiě)幾個(gè)函數(shù),這樣就會(huì)有很多新事物出現(xiàn)。然后,它還能做更復(fù)雜的事情。有一天,也許會(huì)有一個(gè)人工智能,你可以對(duì)它說(shuō):“幫我創(chuàng)建并運(yùn)營(yíng)這家公司”。再往后,也許會(huì)有一個(gè)人工智能,你可以對(duì)它說(shuō):“去發(fā)現(xiàn)新的物理學(xué)”。我們現(xiàn)在看到的東西既令人興奮又美妙,但我認(rèn)為值得把它放在這項(xiàng)技術(shù)的背景下來(lái)看:至少在未來(lái)五到十年內(nèi),這項(xiàng)技術(shù)將處于一條非常陡峭的改進(jìn)曲線上?,F(xiàn)在的這些模型,將是這些模型有史以來(lái)最愚蠢的樣子。

  編程可能是目前我們?cè)谔岣呱a(chǎn)力方面最感興奮的單一領(lǐng)域。目前,它已經(jīng)被大規(guī)模部署和使用。醫(yī)療保健和教育是另外兩個(gè)正沿著這條曲線上升、我們也非常期待的領(lǐng)域。

  比爾·蓋茨:有點(diǎn)令人生畏的是,與以往的技術(shù)改進(jìn)不同,這項(xiàng)技術(shù)的改進(jìn)速度非???,而且沒(méi)有上限。它可以在很多工作領(lǐng)域達(dá)到人類的水平,即使做不出獨(dú)特的科學(xué)研究,它也可以打客服電話和銷售電話。我想你和我確實(shí)有一些擔(dān)憂,盡管這是一件好事,但它將迫使我們比以往任何時(shí)候都要更快地適應(yīng)。

  薩姆·奧爾特曼:這才是可怕的地方。可怕之處不在于我們必須去適應(yīng),也不在于人類缺乏超強(qiáng)的適應(yīng)能力。我們已經(jīng)經(jīng)歷過(guò)這些大規(guī)模的技術(shù)變革,人們所從事的大量工作可能在幾代人的時(shí)間里發(fā)生變化,而在幾代人的時(shí)間尺度上,我們似乎可以很好地吸收這些變化。在過(guò)去那些偉大的技術(shù)革命中,我們已經(jīng)看到了這一點(diǎn)。但每一次技術(shù)革命都來(lái)得更快,而這次將是迄今為止最快的一次。這才是我覺(jué)得有點(diǎn)可怕的地方:社會(huì)需要以多快的速度去適應(yīng),以及勞動(dòng)力市場(chǎng)將隨之發(fā)生的變化。

  比爾·蓋茨:人工智能的一個(gè)方面是機(jī)器人技術(shù),或者說(shuō)藍(lán)領(lǐng)工作,也就是當(dāng)你擁有達(dá)到人類水平能力的手和腳的時(shí)候。ChatGPT令人難以置信的突破讓我們開(kāi)始關(guān)注白領(lǐng)工作,這完全沒(méi)問(wèn)題,但我擔(dān)心人們會(huì)失去對(duì)藍(lán)領(lǐng)工作的關(guān)注。你如何看待機(jī)器人技術(shù)?

  薩姆·奧爾特曼:我對(duì)此非常興奮。我們太早開(kāi)始研究機(jī)器人了,所以不得不擱置那個(gè)項(xiàng)目。它也因?yàn)殄e(cuò)誤的原因而變得困難,無(wú)助于我們?cè)跈C(jī)器學(xué)習(xí)研究的困難部分取得進(jìn)展。我們一直在處理糟糕的模擬器和肌腱斷裂之類的問(wèn)題。隨著時(shí)間的推移,我們也越來(lái)越意識(shí)到,我們首先需要的是智能和認(rèn)知,然后才能想辦法讓它適應(yīng)物理特性。從我們構(gòu)建這些語(yǔ)言模型的方式來(lái)看,從那開(kāi)始更容易。但我們一直計(jì)劃回到這個(gè)問(wèn)題上來(lái)。

  我們已經(jīng)開(kāi)始對(duì)一些機(jī)器人公司進(jìn)行投資。在物理硬件方面,我終于第一次看到了真正令人興奮的新平臺(tái)被建立起來(lái)。到時(shí)候,我們就能利用我們的模型,就像你剛才說(shuō)的,利用它們的語(yǔ)言理解能力和未來(lái)的視頻理解能力,說(shuō):“好吧,讓我們用機(jī)器人做一些了不起的事情吧。”

  比爾·蓋茨:如果那些已經(jīng)把腿部做得很好的硬件人員真的把手臂、手掌和手指做出來(lái),然后我們?cè)侔阉鼈兘M合起來(lái),而且價(jià)格也不會(huì)貴得離譜,那么這將會(huì)迅速改變很多藍(lán)領(lǐng)類工作的就業(yè)市場(chǎng)。

  薩姆·奧爾特曼:是的。當(dāng)然,如果回到七到十年前,當(dāng)時(shí)的共識(shí)性預(yù)測(cè)是:首先受影響的是藍(lán)領(lǐng)工作,其次是白領(lǐng)工作,而創(chuàng)造性工作也許永遠(yuǎn)不會(huì)受影響,至少也會(huì)是最后一個(gè),因?yàn)槟鞘悄Хǎ菍儆谌祟惖念I(lǐng)域。

  顯然,現(xiàn)在的情況正好相反。我認(rèn)為這其中有很多有趣的原因可以解釋它為什么會(huì)發(fā)生。對(duì)創(chuàng)造性工作而言,GPT模型的“幻覺(jué)”是一種特性,而不是缺陷,它能讓你發(fā)現(xiàn)一些新事物。而如果你要讓機(jī)器人搬動(dòng)重型機(jī)械,你最好做到非常精確。我認(rèn)為這只是一個(gè)“你必須跟著技術(shù)走”的例子。你可能有一些先入為主的觀念,但有時(shí)科學(xué)并不往那個(gè)方向發(fā)展。

  比爾·蓋茨:那么你手機(jī)上最常用的應(yīng)用是什么?

  薩姆·奧爾特曼:Slack。

  比爾·蓋茨:真的嗎?

  薩姆·奧爾特曼:是的,我希望我能說(shuō)是ChatGPT。

  比爾·蓋茨:【笑】甚至比電子郵件還多?

  薩姆·奧爾特曼:遠(yuǎn)遠(yuǎn)超過(guò)電子郵件。我認(rèn)為唯一可能超過(guò)它的是iMessages,但確實(shí)Slack比iMessages還多。

  比爾·蓋茨:在OpenAI內(nèi)部,有很多協(xié)調(diào)工作要做。

  薩姆·奧爾特曼:是的,那你呢?

  比爾·蓋茨:我是Outlook。我是傳統(tǒng)的電子郵件派,要么就是瀏覽器,因?yàn)?,?dāng)然,我的許多新聞都是通過(guò)瀏覽器看的。

  薩姆·奧爾特曼:我沒(méi)有把瀏覽器算作一個(gè)應(yīng)用,有可能我使用它的頻率更高,但我仍然打賭是Slack,我整天都在使用它。

  比爾·蓋茨:不可思議。

  比爾·蓋茨:好吧,我們這里有一個(gè)黑膠唱片機(jī)。我像對(duì)其他嘉賓那樣,要求薩姆帶來(lái)一張他最喜歡的唱片。那么,你今天帶來(lái)了什么?

  薩姆·奧爾特曼:我?guī)?lái)了馬克斯·里希特重新編曲的維瓦爾第的《新四季》。我工作時(shí)喜歡無(wú)歌詞的音樂(lè),這張唱片既保留了維瓦爾第原作的舒適感,也有我非常熟悉的曲子,但又有足夠多新的音符帶來(lái)完全不同的體驗(yàn)。有些音樂(lè)作品,你會(huì)因?yàn)樵谌松年P(guān)鍵時(shí)期大量地聽(tīng)它們而形成強(qiáng)烈的情感依戀,而《新四季》正是我在我們初創(chuàng)OpenAI時(shí)經(jīng)常聽(tīng)的東西。

  我認(rèn)為這是非常美妙的音樂(lè),它高亢而樂(lè)觀,完美適配我工作時(shí)的需求,我覺(jué)得新版本非常棒。

  比爾·蓋茨:這是由交響樂(lè)團(tuán)演奏的嗎?

  薩姆·奧爾特曼:是的,是由Chineke!樂(lè)團(tuán)演奏的。

  比爾·蓋茨:太棒了。

  薩姆·奧爾特曼:現(xiàn)在就播嗎?

  比爾·蓋茨:是的,我們來(lái)聽(tīng)聽(tīng)。

  薩姆·奧爾特曼:這是我們要聽(tīng)的樂(lè)章的序曲。

  比爾·蓋茨:你戴耳機(jī)嗎?

  薩姆·奧爾特曼:我戴。

  比爾·蓋茨:你的同事們會(huì)因?yàn)槟懵?tīng)古典音樂(lè)而取笑你嗎?

  薩姆·奧爾特曼:我不認(rèn)為他們知道我在聽(tīng)什么,因?yàn)槲掖_實(shí)戴著耳機(jī)。在寂靜中工作對(duì)我來(lái)說(shuō)非常困難,我可以做到,但這不是我的自然狀態(tài)。

  比爾·蓋茨:這很有趣。我同意,帶歌詞的歌曲會(huì)讓我覺(jué)得分心,但這更多是一種情緒類型的東西。

  薩姆·奧爾特曼:是的,而且我把它調(diào)得很輕,我也不能聽(tīng)響亮的音樂(lè),不知為何這是我一直以來(lái)的習(xí)慣。

  比爾·蓋茨:太棒了,感謝你帶來(lái)美妙的音樂(lè)。

  比爾·蓋茨:現(xiàn)在,對(duì)我來(lái)說(shuō),如果你真的借助人工智能達(dá)到了令人難以置信的能力,AGI(通用人工智能)、AGI+(超越通用人工智能的系統(tǒng)),我擔(dān)心的有三件事:一是壞人控制了系統(tǒng),如果我們有好人擁有同樣強(qiáng)大的系統(tǒng),有希望能把這個(gè)問(wèn)題降到最小;二是系統(tǒng)自行控制一切的可能性,出于某些原因,我不太擔(dān)心這個(gè)問(wèn)題,但我很高興有其他人關(guān)注它;最讓我感到困惑的,是人類的目的問(wèn)題。我很擅長(zhǎng)研究并根除瘧疾,也很擅長(zhǎng)召集聰明人并為此投入資源??僧?dāng)機(jī)器人對(duì)我說(shuō):“比爾,去打匹克球吧,我能根除瘧疾,你只是個(gè)思維遲鈍的人”,那在哲學(xué)上就是一件令人困惑的事情。我們?nèi)绾谓M織社會(huì)?是的,我們要改善教育,但如果走向極端,教育又是為了什么?對(duì)此我們?nèi)杂泻艽蟮牟淮_定性。這是第一次,這種情況在未來(lái)20年內(nèi)發(fā)生的概率不再為零。

  薩姆·奧爾特曼:從事技術(shù)工作有很多心理上的困難,但你說(shuō)的這些對(duì)我來(lái)說(shuō)是最困難的,因?yàn)槲乙矎闹蝎@得了很多滿足感。

  比爾·蓋茨:你確實(shí)帶來(lái)了價(jià)值。

  薩姆·奧爾特曼:從某種意義上來(lái)說(shuō),這可能是我做的最后一件難事。

  比爾·蓋茨:我們的思維如此依賴于稀缺性,教師、醫(yī)生和好的想法的稀缺,部分原因是,我確實(shí)在想,如果一代人在沒(méi)有這種稀缺的情況下成長(zhǎng),他們會(huì)對(duì)如何組織社會(huì)以及要做什么這個(gè)哲學(xué)概念會(huì)產(chǎn)生什么看法,也許他們會(huì)想出一個(gè)解決方案。我擔(dān)心我的思維如此受到稀缺性的影響,以至于我甚至很難思考這個(gè)問(wèn)題。

  薩姆·奧爾特曼:這也是我告訴自己的,而且我真心相信,雖然我們?cè)谀撤N意義上放棄了一些東西,但我們將會(huì)擁有比我們?nèi)祟惛斆鞯臇|西。如果我們能進(jìn)入這個(gè)“后稀缺”世界,我們將會(huì)找到新的事情去做。它們會(huì)感覺(jué)非常不同。也許你不是在解決瘧疾問(wèn)題,而是在決定你喜歡哪個(gè)星系,以及你打算如何處理它。我相信我們永遠(yuǎn)不會(huì)缺少問(wèn)題,不會(huì)缺少獲得滿足感和為彼此做事的方式,不會(huì)缺少對(duì)我們?nèi)绾螢槠渌送嫒祟愑螒虻姆绞降睦斫?,這將仍然非常重要。這肯定會(huì)有所不同,但我認(rèn)為唯一的出路就是走下去。我們必須去做這件事,它必將會(huì)發(fā)生,且現(xiàn)在已經(jīng)是一個(gè)不可阻擋的技術(shù)進(jìn)程,因?yàn)槠鋬r(jià)值太大了。我非常非常有信心,我們會(huì)成功的,但感覺(jué)確實(shí)會(huì)很不一樣。

  比爾·蓋茨:將這項(xiàng)技術(shù)應(yīng)用于某些當(dāng)前問(wèn)題,比如為孩子們提供家教、幫助激發(fā)他們的動(dòng)力,或者發(fā)現(xiàn)治療阿爾茨海默癥的藥物,我認(rèn)為路徑是非常清楚的。但人工智能能否幫助我們減少戰(zhàn)爭(zhēng)、減少分化,就不那么確定了。你會(huì)認(rèn)為隨著智能的提升,不分化是常識(shí),不發(fā)動(dòng)戰(zhàn)爭(zhēng)也是常識(shí),但我確實(shí)認(rèn)為很多人會(huì)持懷疑態(tài)度。我很愿意讓人們致力于解決最困難的人類問(wèn)題,比如我們能否和睦相處。如果人工智能可以幫助人類更好地相處,我認(rèn)為那將是非常積極的。

  薩姆·奧爾特曼:我相信它會(huì)在這方面給我們帶來(lái)意外的驚喜。這項(xiàng)技術(shù)會(huì)讓我們驚訝于它能做的事情有多么多。我們還得拭目以待,但我非常樂(lè)觀。我同意你的看法,這將是非常大的貢獻(xiàn)。

  比爾·蓋茨:就公平性而言,技術(shù)通常很昂貴,比如個(gè)人電腦或互聯(lián)網(wǎng)連接,而降低成本需要時(shí)間。我想,運(yùn)行這些人工智能系統(tǒng)的成本看起來(lái)很不錯(cuò),每次評(píng)估的成本會(huì)降低很多嗎?

  薩姆·奧爾特曼:它已經(jīng)降低了很多。GPT-3是我們推出時(shí)間最長(zhǎng)、優(yōu)化最久的模型,在它推出的三年多時(shí)間里,我們已經(jīng)將成本降低了40倍。對(duì)于三年的時(shí)間來(lái)說(shuō),這是一個(gè)很好的開(kāi)始。至于GPT-3.5版,我敢打賭,目前我們已經(jīng)將其成本降低了近10倍。GPT-4是新產(chǎn)品,所以我們還沒(méi)有那么多時(shí)間來(lái)降低成本,但我們會(huì)繼續(xù)。我認(rèn)為,在我所知道的所有技術(shù)中,我們的成本下降曲線是最陡峭的,優(yōu)于摩爾定律。這不僅是因?yàn)槲覀兿氤隽巳绾巫屇P透咝У姆椒?,還因?yàn)槲覀儗?duì)研究有了更好的理解,我們可以在更小的模型中獲得更多的知識(shí)和能力。我認(rèn)為,我們將把智能的成本降低到接近于零的程度,這對(duì)社會(huì)來(lái)說(shuō)將是一個(gè)改頭換面的轉(zhuǎn)變。

  現(xiàn)在,我的世界基本模型由智能成本和能源成本組成?!颈葼栃α恕窟@是影響生活質(zhì)量的兩個(gè)最大因素,尤其是對(duì)窮人而言,但總體來(lái)看也是如此。如果你能同時(shí)降低這兩方面的成本,你能擁有的東西就會(huì)更多,你能為人們帶來(lái)的改善就會(huì)更大。我們正走在一條曲線上,至少在智能方面,我們將真正實(shí)現(xiàn)這一承諾。即使按照目前的價(jià)格(這也是有史以來(lái)最高的價(jià)格,而且遠(yuǎn)遠(yuǎn)超出了我們的預(yù)期),每月20美元,你就能獲得大量的GPT-4訪問(wèn)權(quán)限,而且價(jià)值遠(yuǎn)遠(yuǎn)超過(guò)20美元。我們已經(jīng)降得很低了。
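
  順便可以粗略換算一下這里提到的降價(jià)速度(以下僅為示意,假設(shè)按年均復(fù)合因子計(jì)算):三年降低40倍,約合每年3.4倍;作為對(duì)照,摩爾定律大致是每?jī)赡?倍,約合每年1.41倍。

```python
def annual_factor(total_reduction: float, years: float) -> float:
    """由總降幅換算出的年均成本下降因子(復(fù)合計(jì)算)。"""
    return total_reduction ** (1 / years)

gpt3_rate = annual_factor(40, 3)   # 三年40倍,約合每年3.42倍
moore_rate = annual_factor(2, 2)   # 摩爾定律:兩年2倍,約合每年1.41倍
```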

  比爾·蓋茨:那競(jìng)爭(zhēng)呢?很多人一下子同時(shí)擠進(jìn)這個(gè)賽道是不是一件有趣的事情?

  薩姆·奧爾特曼:既令人討厭,又充滿動(dòng)力和樂(lè)趣,【比爾笑了】我相信你也有過(guò)類似的感覺(jué)。這確實(shí)促使我們做得更快、更好,我們對(duì)自己的方法很有信心。這個(gè)領(lǐng)域里有很多人,我認(rèn)為他們都在往冰球現(xiàn)在所在的位置滑,而我們則在往冰球?qū)⒁サ牡胤交@感覺(jué)很好。

  比爾·蓋茨:我認(rèn)為人們會(huì)對(duì)OpenAI的規(guī)模之小感到驚訝。你們有多少員工?

  薩姆·奧爾特曼:大約500人,所以我們比以前稍微大一些。

  比爾·蓋茨:但那很小,【笑】要是以谷歌、微軟、蘋(píng)果的標(biāo)準(zhǔn)來(lái)看。

  薩姆·奧爾特曼:確實(shí)很小,我們不僅要經(jīng)營(yíng)研究實(shí)驗(yàn)室,現(xiàn)在還要經(jīng)營(yíng)一家真正的企業(yè)和兩款產(chǎn)品。

  比爾·蓋茨:你所有能力的擴(kuò)展,包括與世界上所有的人交談,傾聽(tīng)所有支持者的聲音,這對(duì)你來(lái)說(shuō)一定很有趣。

  薩姆·奧爾特曼:非常令人著迷。

  比爾·蓋茨:這是一家員工都很年輕的公司嗎?

  薩姆·奧爾特曼:比平均年齡要大一些。

  比爾·蓋茨:好的。

  薩姆·奧爾特曼:這里不是一群24歲的程序員。

  比爾·蓋茨:的確,我的視角有些扭曲了,因?yàn)槲乙呀?jīng)60多歲了。我看到你,你比我年輕,但你說(shuō)得對(duì),你們有很多人四十多歲了。

  薩姆·奧爾特曼:三十多歲、四十多歲、五十多歲(的人)。

  比爾·蓋茨:這不像早期的蘋(píng)果、微軟,那時(shí)我們真的還是孩子。

  薩姆·奧爾特曼:不是的,我也反思過(guò)這個(gè)問(wèn)題。我認(rèn)為公司普遍變老了,我不知道該如何看待這個(gè)問(wèn)題。我認(rèn)為這在某種程度上對(duì)社會(huì)是個(gè)不好的跡象,但我在 YC(Y Combinator)追蹤過(guò)這個(gè)問(wèn)題。隨著時(shí)間的推移,最優(yōu)秀的創(chuàng)始人年齡都呈增長(zhǎng)趨勢(shì)。

  比爾·蓋茨:這很有意思。

  薩姆·奧爾特曼:就我們的情況而言,甚至比平均年齡還要大一些。

  比爾·蓋茨:你在YC扮演的角色幫助這些公司學(xué)到了很多,我想這對(duì)你現(xiàn)在的工作也是很好的鍛煉?!拘Α?/p>

  薩姆·奧爾特曼:那非常有幫助。

  比爾·蓋茨:包括看到錯(cuò)誤。

  薩姆·奧爾特曼:完全可以這么說(shuō)。OpenAI做了很多與YC建議的標(biāo)準(zhǔn)相反的事情。我們花了四年半時(shí)間才推出我們的第一個(gè)產(chǎn)品。公司成立之初,我們對(duì)產(chǎn)品沒(méi)有任何概念,我們沒(méi)有與用戶交流。我仍然不建議大多數(shù)公司這樣做,但在YC學(xué)習(xí)和見(jiàn)識(shí)過(guò)這些規(guī)則后,我覺(jué)得自己明白了何時(shí)、如何以及為什么我們可以打破這些規(guī)則,我們所做的事情真的與我見(jiàn)過(guò)的其他公司大相徑庭。

  比爾·蓋茨:關(guān)鍵是你集結(jié)的人才團(tuán)隊(duì),讓他們專注于大問(wèn)題,而不是某些短期的收益問(wèn)題。

  薩姆·奧爾特曼:我認(rèn)為硅谷的投資者不會(huì)在我們需要的水平上支持我們,因?yàn)槲覀儽仨氃谘芯可匣ㄙM(fèi)如此多的資金才能推出產(chǎn)品。我們只是說(shuō):“最終模型會(huì)足夠好,我們知道它會(huì)對(duì)人們有價(jià)值。”但我們非常感激與微軟的合作,因?yàn)檫@種超前投資并不是風(fēng)險(xiǎn)投資行業(yè)擅長(zhǎng)的。

  比爾·蓋茨:確實(shí)不是,而且資本成本相當(dāng)可觀,幾乎達(dá)到了風(fēng)險(xiǎn)投資所能承受的極限。

  薩姆·奧爾特曼:可能已經(jīng)超過(guò)了。

  比爾·蓋茨:確實(shí)可能。我非常贊同薩蒂亞對(duì)于“如何將這個(gè)杰出的人工智能組織與大型軟件公司結(jié)合起來(lái)?”的思考,甚至可以說(shuō)一加一遠(yuǎn)遠(yuǎn)大于二。

  薩姆·奧爾特曼:是的,這很棒。你真說(shuō)到點(diǎn)上了,這也是我從YC學(xué)到的。我們可以說(shuō)要找世界上最好的人來(lái)做這件事。我們要確保我們的目標(biāo)和AGI的使命是一致的。但除此之外,我們要讓人們做自己的事情。我們會(huì)意識(shí)到這將經(jīng)歷一些曲折,需要一段時(shí)間。

  我們有一個(gè)大致正確的理論,但一路上的很多策略都被證明是大錯(cuò)特錯(cuò)的,我們只是試圖遵循科學(xué)。

  比爾·蓋茨:我記得我去看了演示,也確實(shí)想過(guò)這個(gè)項(xiàng)目的收入途徑是什么?是什么樣的?在這個(gè)狂熱的時(shí)代,你仍然手握一個(gè)令人難以置信的團(tuán)隊(duì)。

  薩姆·奧爾特曼:是的,優(yōu)秀的人都希望與優(yōu)秀的同事共事。

  比爾·蓋茨:那是一種吸引力。

  薩姆·奧爾特曼:那里有一個(gè)很深的引力中心。此外,這聽(tīng)起來(lái)很陳詞濫調(diào),每家公司都這么說(shuō),但人們感受到了深深的使命感,每個(gè)人都想?yún)⑴cAGI的創(chuàng)建。

  比爾·蓋茨:那一定很激動(dòng)人心。當(dāng)你再次用演示震撼我時(shí),我可以感受到那股能量。我看到了新的人,新的想法,而你們?nèi)砸苑浅2豢伤甲h的速度前進(jìn)著。

  薩姆·奧爾特曼:你最常給出的建議是什么?

  比爾·蓋茨:才能可以分很多種,在我職業(yè)生涯的早期,我認(rèn)為只有純粹的智商,比如工程智商,當(dāng)然,你可以將其應(yīng)用于金融和銷售。但這種想法被證明是如此錯(cuò)誤,建立一個(gè)擁有正確技能組合的團(tuán)隊(duì)是如此重要。針對(duì)他們的問(wèn)題,引導(dǎo)他們思考應(yīng)該如何建立一個(gè)擁有所有不同技能的團(tuán)隊(duì),這可能是我認(rèn)為最有幫助的建議之一。是的,告訴孩子們,數(shù)學(xué)、科學(xué)很酷,如果你喜歡的話,但真正讓我驚訝的是才能的混合。

  那你呢?你給出的建議是什么?

  薩姆·奧爾特曼:關(guān)于大多數(shù)人對(duì)風(fēng)險(xiǎn)的誤判。他們害怕離開(kāi)舒適的工作,去做他們真正想做的事情。實(shí)際上,如果他們不這樣做,他們回顧自己的一生時(shí)就會(huì)想,“天啊,我從來(lái)沒(méi)有去創(chuàng)辦我想創(chuàng)辦的公司,或者我從未嘗試成為一名人工智能研究員。”我認(rèn)為實(shí)際上這樣風(fēng)險(xiǎn)更大。

  與此相關(guān)的是,明確自己想要做什么,并向別人提出自己的要求,會(huì)有意想不到的收獲。很多人受困于把時(shí)間花在自己不想做的事情上,而我最常給的建議可能就是想辦法解決這個(gè)問(wèn)題。

  比爾·蓋茨:如果你能讓人們從事一份讓他們感到有目標(biāo)的工作,那會(huì)更有趣。有時(shí),他們就是這樣產(chǎn)生巨大影響的。

  薩姆·奧爾特曼:當(dāng)然。

  比爾·蓋茨:感謝你的到來(lái),這是一次精彩的對(duì)話。在未來(lái)的日子里,我相信我們還會(huì)有更多的交流,因?yàn)槲覀冋σ宰詈玫姆绞剿茉烊斯ぶ悄堋?/p>

  薩姆·奧爾特曼:非常感謝你的邀請(qǐng),我真的很享受與你對(duì)話。

  比爾·蓋茨:《為自己解惑》是蓋茨筆記的一個(gè)節(jié)目。特別感謝我今天的嘉賓薩姆·奧爾特曼。

  比爾·蓋茨:告訴我你的第一臺(tái)電腦是什么?

  薩姆·奧爾特曼:是Mac LC2。

  比爾·蓋茨:不錯(cuò)的選擇。

  薩姆·奧爾特曼:是個(gè)好東西,我還留著它,它到現(xiàn)在還能用。

  If you ask people to name leaders in artificial intelligence, there’s one name you’ll probably hear more than any other: Sam Altman. His team at OpenAI is pushing the boundaries of what AI can do with ChatGPT, and I loved getting to talk to him about what’s next. Our conversation covered why today’s AI models are the stupidest they’ll ever be, how societies adapt to technological change, and even where humanity will find purpose once we’ve perfected artificial intelligence.

  BILL GATES:My guest today is Sam Altman. He, of course, is the CEO of OpenAI. He’s been an entrepreneur and a leader in the tech industry for a long time, including running Y Combinator, that did amazing things like funding Reddit, Dropbox, Airbnb.

  A little while after I recorded this episode, I was completely taken by surprise when, at least briefly, he was let go as the CEO of OpenAI. A lot happened in the days after the firing, including a show of support from nearly all of OpenAI’s employees, and Sam is back. So, before you hear the conversation that we had, let’s check in with Sam and see how he’s doing.

  [audio – Teams call initiation]

  BILL GATES:Hey, Sam.

  SAM ALTMAN:Hey, Bill.

  BILL GATES:How are you?

  SAM ALTMAN:Oh, man. It’s been so crazy. I’m all right. It’s a very exciting time.

  BILL GATES:How’s the team doing?

  SAM ALTMAN:I think, you know a lot of people have remarked on the fact that the team has never felt more productive or more optimistic or better. So, I guess that’s like a silver lining of all of this.

  In some sense, this was like a real moment of growing up for us, we are very motivated to become better, and sort of to become a company ready for the challenges in front of us.

  BILL GATES:Fantastic.

  So, we won’t be discussing that situation in the conversation; however, you will hear about Sam’s commitment to build a safe and responsible AI. I hope you enjoy the conversation.

  Welcome to Unconfuse Me. I’m Bill Gates.

  BILL GATES:Today we’re going to focus mostly on AI, because it’s such an exciting thing, and people are also concerned. Welcome, Sam.

  SAM ALTMAN:Thank you so much for having me.

  BILL GATES:I was privileged to see your work as it evolved, and I was very skeptical. I didn’t expect ChatGPT to get so good. It blows my mind, and we don’t really understand the encoding. We know the numbers, we can watch it multiply, but the idea of where is Shakespearean encoded? Do you think we’ll gain an understanding of the representation?

  SAM ALTMAN:A hundred percent. Trying to do this in a human brain is very hard. You could say it’s a similar problem, which is there are these neurons, they’re connected. The connections are moving and we’re not going to slice up your brain and watch how it’s evolving, but this we can perfectly x-ray. There has been some very good work on interpretability, and I think there will be more over time. I think we will be able to understand these networks, but our current understanding is low. The little bits we do understand have, as you’d expect, been very helpful in improving these things. We’re all motivated to really understand them, scientific curiosity aside, but the scale of these is so vast. We also could say, where in your brain is Shakespeare encoded, and how is that represented?

  BILL GATES:We don’t know.

  SAM ALTMAN:We don’t really know, but it somehow feels even less satisfying to say we don’t know yet in these masses of numbers that we’re supposed to be able to perfectly x-ray and watch and do any tests we want to on.

  BILL GATES:I’m pretty sure, within the next five years, we’ll understand it. In terms of both training efficiency and accuracy, that understanding would let us do far better than we’re able to do today.

  SAM ALTMAN:A hundred percent. You see this in a lot of the history of technology where someone makes an empirical discovery. They have no idea what’s going on, but it clearly works. Then, as the scientific understanding deepens, they can make it so much better.

  BILL GATES:Yes, in physics, biology, it’s sometimes just messing around, and it’s like, whoa – how does this actually come together?

  SAM ALTMAN: In our case, the guy that built GPT-1 sort of did it off by himself and solved this, and it was somewhat impressive, but no deep understanding of how it worked or why it worked. Then we got the scaling laws. We could predict how much better it was going to be. That was why, when we told you we could do a demo, we were pretty confident it was going to work. We hadn’t trained the model, but we were pretty confident. That has led us to a bunch of attempts and better and better scientific understanding of what’s going on. But it really came from a place of empirical results first.

  BILL GATES: When you look at the next two years, what do you think some of the key milestones will be?

  SAM ALTMAN:Multimodality will definitely be important.

  BILL GATES: Which means speech in, speech out?

  SAM ALTMAN: Speech in, speech out. Images. Eventually video. Clearly, people really want that. We’ve launched images and audio, and it had a much stronger response than we expected. We’ll be able to push that much further, but maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways. Also reliability. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn’t always know which one, and you’d like to get the best response of 10,000 each time, and so that increase in reliability will be important.

  Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We’ll make all that possible, and then also the ability to have it use your own data. The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas of improvement.

  BILL GATES:In the basic algorithm right now, it’s just feed forward, multiply, and so to generate every new word, it’s essentially doing the same thing. I’ll be interested if you ever get to the point where, like in solving a complex math equation, you might have to apply transformations an arbitrary number of times, that the control logic for the reasoning may have to be quite a bit more complex than just what we do today.

  SAM ALTMAN:At a minimum, it seems like we need some sort of adaptive compute. Right now, we spend the same amount of compute on each token, a dumb one, or figuring out some complicated math.

  BILL GATES:Yes, when we say, "Do the Riemann hypothesis …"

  SAM ALTMAN:That deserves a lot of compute.

  BILL GATES:It’s the same compute as saying, "The."

  SAM ALTMAN:Right, so at a minimum, we’ve got to get that to work. We may need much more sophisticated things beyond it.

  BILL GATES:You and I were both part of a Senate Education Session, and I was pleased that about 30 senators came to that, and helping them get up to speed, since it’s such a big change agent. I don’t think we could ever say we did too much to draw the politicians in. And yet, when they say, "Oh, we blew it on social media, we should do better," – that is an outstanding challenge that there are very negative elements to, in terms of polarization. Even now, I’m not sure how we would deal with that.

  SAM ALTMAN:I don’t understand why the government was not able to be more effective around social media, but it seems worth trying to understand as a case study for what they’re going to go through now with AI.

  BILL GATES:It’s a good case study, and when you talk about the regulation, is it clear to you what sort of regulations would be constructed?

  SAM ALTMAN:I think we’re starting to figure that out. It would be very easy to put way too much regulation on this space. You can look at lots of examples of where that’s happened before. But also, if we are right, and we may turn out not to be, but if we are right, and this technology goes as far as we think it’s going to go, it will impact society, geopolitical balance of power, so many things, that for these, still hypothetical, but future extraordinarily powerful systems – not like GPT-4, but something with 100,000 or a million times the compute power of that, we have been socialized in the idea of a global regulatory body that looks at those super-powerful systems, because they do have such global impact. One model we talk about is something like the IAEA. For nuclear energy, we decided the same thing. This needs a global agency of some sort, because of the potential for global impact. I think that could make sense. There will be a lot of shorter term issues, issues of what are these models allowed to say and not say? How do we think about copyright? Different countries are going to think about those differently and that’s fine.

  BILL GATES: Some people think if there are models that are so powerful, we’re scared of them – the reason nuclear regulation works globally, is basically everyone, at least on the civilian side, wants to share safety practices, and it has been fantastic. When you get over into the weapons side of nuclear, you don’t have that same thing. If the key is to stop the entire world from doing something dangerous, you’d almost want global government, which today for many issues, like climate, terrorism, we see that it’s hard for us to cooperate. People even invoke U.S.-China competition to say why any notion of slowing down would be inappropriate. Isn’t any idea of slowing down, or going slow enough to be careful, hard to enforce?

  SAM ALTMAN:Yes, I think if it comes across as asking for a slowdown, that will be really hard. If it instead says, "Do what you want, but any compute cluster above a certain extremely high-power threshold" – and given the cost here, we’re talking maybe five in the world, something like that – any cluster like that has to submit to the equivalent of international weapons inspectors. The model there has to be made available for safety audit, pass some tests during training, and before deployment. That feels possible to me. I wasn’t that sure before, but I did a big trip around the world this year, and talked to heads of state in many of the countries that would need to participate in this, and there was almost universal support for it. That’s not going to save us from everything. There are still going to be things that are going to go wrong with much smaller-scale systems, in some cases, probably pretty badly wrong. But I think that can help us with the biggest tier of risks.

  BILL GATES:I do think AI, in the best case, can help us with some hard problems.

  SAM ALTMAN: For sure.

  BILL GATES: Including polarization because potentially that breaks democracy and that would be a super-bad thing. Right now, we’re looking at a lot of productivity improvement from AI, which is overwhelmingly a very good thing. Which areas are you most excited about?

  SAM ALTMAN:First of all, I always think it’s worth remembering that we’re on this long, continuous curve. Right now, we have AI systems that can do tasks. They certainly can’t do jobs, but they can do tasks, and there’s productivity gain there. Eventually, they will be able to do more things that we think of like a job today, and we will, of course, find new jobs and better jobs. I totally believe that if you give people way more powerful tools, it’s not just that they can work a little faster, they can do qualitatively different things. Right now, maybe we can speed up a programmer 3x. That’s about what we see, and that’s one of the categories that we’re most excited about it. It’s working super-well. But if you make a programmer three times more effective, it’s not just that they can do three times more stuff, it’s that they can – at that higher level of abstraction, using more of their brainpower – they can now think of totally different things. It’s like going from punch cards to higher level languages didn’t just let us program a little faster, it let us do these qualitatively new things. We’re really seeing that.

  As we look at these next steps of things that can do a more complete task, you can imagine a little agent that you can say, "Go write this whole program for me, I’ll ask you a few questions along the way, but it won’t just be writing a few functions at a time." That’ll enable a bunch of new stuff. And then again, it’ll do even more complex stuff. Someday, maybe there’s an AI where you can say, "Go start and run this company for me." And then someday, there’s maybe an AI where you can say, "Go discover new physics." The stuff that we’re seeing now is very exciting and wonderful, but I think it’s worth always putting it in context of this technology that, at least for the next five or ten years, will be on a very steep improvement curve. These are the stupidest the models will ever be.

  Coding is probably the single area from a productivity gain we’re most excited about today. It’s massively deployed and at scaled usage at this point. Healthcare and education are two things that are coming up that curve that we’re very excited about too.

  BILL GATES:The thing that is a little daunting is, unlike previous technology improvements, this one could improve very rapidly, and there’s kind of no upper bound. The idea that it achieves human levels on a lot of areas of work, even if it’s not doing unique science, it can do support calls and sales calls. I guess you and I do have some concern, along with this good thing, that it’ll force us to adapt faster than we’ve had to ever before.

  SAM ALTMAN:That’s the scary part. It’s not that we have to adapt. It’s not that humanity is not super-adaptable. We’ve been through these massive technological shifts, and a massive percentage of the jobs that people do can change over a couple of generations, and over a couple of generations, we seem to absorb that just fine. We’ve seen that with the great technological revolutions of the past. Each technological revolution has gotten faster, and this will be the fastest by far. That’s the part that I find potentially a little scary, is the speed with which society is going to have to adapt, and that the labor market will change.

  BILL GATES:One aspect of AI is robotics, or blue-collar jobs, when you get hands and feet that are at human-level capability. The incredible ChatGPT breakthrough has kind of gotten us focused on the white-collar thing, which is super appropriate, but I do worry that people are losing the focus on the blue-collar piece. So how do you see robotics?

  SAM ALTMAN: Super-excited for that. We started robots too early, so we had to put that project on hold. It was hard for the wrong reasons. It wasn’t helping us make progress with the difficult parts of the ML research. We were dealing with bad simulators and breaking tendons and things like that. We also realized more and more over time that we first needed intelligence and cognition, and then we could figure out how to adapt it to physicality. It was easier to start with that with the way we built these language models. But we have always planned to come back to it.

  We’ve started investing a little bit in robotics companies. On the physical hardware side, there’s finally, for the first time that I’ve ever seen, really exciting new platforms being built there. At some point, we will be able to use our models, as you were saying, with their language understanding and future video understanding, to say, "All right, let’s do amazing things with a robot."

  BILL GATES: If the hardware guys who’ve done a good job on legs actually get the arms, hands, fingers piece, and then we couple it, and it’s not ridiculously expensive, that could change the job market for a lot of the blue-collar type work, pretty rapidly.

  SAM ALTMAN: Yes. Certainly, the prediction, the consensus prediction, if we rewind seven or ten years, was that the impact was going to be blue-collar work first, white-collar work second, creativity maybe never, but certainly last, because that was magic and human.

  Obviously, it’s gone exactly the other direction. I think there are a lot of interesting takeaways about why that happened. For creative work, the hallucinations of the GPT models are a feature, not a bug. They let you discover some new things. Whereas if you’re having a robot move heavy machinery around, you’d better be really precise with that. I think this is just a case of you’ve got to follow where technology goes. You have preconceptions, but sometimes the science doesn’t want to go that way.

  BILL GATES: So what application on your phone do you use the most?

  SAM ALTMAN: Slack.

  BILL GATES: Really?

  SAM ALTMAN: Yes. I wish I could say ChatGPT.

  BILL GATES: [laughs] Even more than e-mail?

  SAM ALTMAN: Way more than e-mail. The only thing that I was thinking possibly was iMessages, but yes, more than that.

  BILL GATES: Inside OpenAI, there’s a lot of coordination going on.

  SAM ALTMAN: Yes. What about you?

  BILL GATES: It’s Outlook. I’m this old-style e-mail guy, either that or the browser, because, of course, a lot of my news is coming through the browser.

  SAM ALTMAN: I didn’t quite count the browser as an app. It’s possible I use it more, but I still would bet Slack. I’m on Slack all day.

  BILL GATES: Incredible.

  BILL GATES: Well, we’ve got a turntable here. I asked Sam, like I have for other guests, to bring one of his favorite records. So, what have we got?

  SAM ALTMAN: I brought The New Four Seasons - Vivaldi Recomposed by Max Richter. I like music with no words for working. That had the old comfort of Vivaldi and pieces I knew really well, but enough new notes that it was a totally different experience. There are pieces of music that you form these strong emotional attachments to, because you listened to them a lot in a key period of your life. This was something that I listened to a lot while we were starting OpenAI.

  I think it’s very beautiful music. It’s soaring and optimistic, and just perfect for me for working. I thought the new version was just super great.

  BILL GATES: Is it performed by an orchestra?

  SAM ALTMAN: It is. The Chineke! Orchestra.

  BILL GATES: Fantastic.

  SAM ALTMAN: Should I play it?

  BILL GATES: Yes, let’s.

  [music – "The New Four Seasons – Vivaldi Recomposed: Spring 1" by Max Richter]

  SAM ALTMAN: This is the intro to the sound we’re going for.

  [music]

  BILL GATES: Do you wear headphones?

  SAM ALTMAN: I do.

  BILL GATES: Do your colleagues give you a hard time about listening to classical music?

  SAM ALTMAN: I don’t think they know what I listen to, because I do wear headphones. It’s very hard for me to work in silence. I can do it, but it’s not my natural state.

  BILL GATES: It’s fascinating. Songs with words, I agree, I would find that distracting, but this is more of a mood type thing.

  SAM ALTMAN: Yes, and I have it quiet. I can’t listen to loud music either, but it’s just somehow always what I’ve done.

  BILL GATES: It’s fantastic. Thanks for bringing it.

  BILL GATES: Now, with AI, to me, if you do get to the incredible capability, AGI, AGI+, there are three things I worry about. One is that a bad guy is in control of the system. If we have good guys who have equally powerful systems, that hopefully minimizes that problem. There’s the chance of the system taking control. For some reason, I’m less concerned about that. I’m glad other people are. The one that sort of befuddles me is human purpose. I get a lot of excitement that, hey, I’m good at working on malaria, and malaria eradication, and getting smart people and applying resources to that. When the machine says to me, "Bill, go play pickleball, I’ve got malaria eradication. You’re just a slow thinker," then it is a philosophically confusing thing. How do you organize society? Yes, we’re going to improve education, but education to do what, if you get to this extreme, about which we still have big uncertainty? For the first time, the chance that might come in the next 20 years is not zero.

  SAM ALTMAN: There’s a lot of psychologically difficult parts of working on the technology, but this is, for me, the most difficult, because I also get a lot of satisfaction from that.

  BILL GATES: You have real value added.

  SAM ALTMAN: In some real sense, this might be the last hard thing I ever do.

  BILL GATES: Our minds are so organized around scarcity, scarcity of teachers and doctors and good ideas, that, partly, I do wonder if a generation that grows up without that scarcity will find the philosophical notion of how to organize society and what to do. Maybe they’ll come up with a solution. I’m afraid my mind is so shaped around scarcity, I even have a hard time thinking of it.

  SAM ALTMAN: That’s what I tell myself too, and it’s what I truly believe, that although we are giving something up here, in some sense, we are going to have things that are smarter than us. If we can get into this world of post-scarcity, we will find new things to do. They will feel very different. Maybe instead of solving malaria, you’re deciding which galaxy you like, and what you’re going to do with it.  I’m confident we’re never going to run out of problems, and we’re never going to run out of different ways to find fulfilment and do things for each other and understand how we play our human games for other humans in this way that’s going to remain really important. It is going to be different for sure, but I think the only way out is through. We have to go do this thing. It’s going to happen. This is now an unstoppable technological course. The value is too great. And I’m pretty confident, very confident, we’ll make it work, but it does feel like it’s going to be so different.

  BILL GATES: The way to apply this to certain current problems, like getting kids a tutor and helping to motivate them, or discover drugs for Alzheimer’s, I think it’s pretty clear how to do that. Whether AI can help us go to war less, be less polarized; you’d think as you drive intelligence, and not being polarized kind of is common sense, and not having war is common sense, but I do think a lot of people would be skeptical. I’d love to have people working on the hardest human problems, like whether we get along with each other. I think that would be extremely positive, if we thought the AI could contribute to humans getting along with each other.

  SAM ALTMAN: I believe that it will surprise us on the upside there. The technology will surprise us with how much it can do. We’ve got to find out and see, but I’m very optimistic. I agree with you, what a contribution that would be.

  BILL GATES: In terms of equity, technology is often expensive, like a PC or Internet connection, and it takes time to come down in cost. I guess the costs of running these AI systems, it looks pretty good that the cost per evaluation is going to come down a lot?

  SAM ALTMAN: It’s come down an enormous amount already. GPT-3, which is the model we’ve had out the longest and the most time to optimize, in the three and a little bit years that it has been out, we’ve been able to bring the cost down by a factor of 40. For three years’ time, that’s a pretty good start. For 3.5, we’ve brought it down, I would bet, close to 10 at this point. Four is newer, so we haven’t had as much time to bring the cost down there, but we will continue to bring the cost down. I think we are on the steepest curve of cost reduction ever of any technology I know, way better than Moore’s Law. It’s not only that we figured out how to make the models more efficient, but also, as we understand the research better, we can get more knowledge, we can get more ability into a smaller model. I think we are going to drive the cost of intelligence down to so close to zero that it will be this before-and-after transformation for society.

  Right now, my basic model of the world is cost of intelligence, cost of energy. [Bill laughs] Those are the two biggest inputs to quality of life, particularly for poor people, but overall. If you can drive both of those way down at the same time, the amount of stuff you can have, the amount of improvement you can deliver for people, it’s quite enormous. We are on a curve, at least for intelligence, we will really, really deliver on that promise. Even at the current cost, which again, this is the highest it will ever be and much more than we want, for 20 bucks a month, you get a lot of GPT-4 access, and way more than 20 bucks’ worth of value. We’ve come down pretty far.
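  [Editor's note: the factor-of-40 reduction Sam quotes implies a strikingly fast annual pace. A rough back-of-the-envelope sketch of the arithmetic, not from the conversation itself; the 3.3-year figure is an assumption standing in for "three and a little bit years":]

```python
# Back-of-the-envelope: the per-year cost-reduction pace implied by
# "a factor of 40" over "three and a little bit years" (assumed 3.3 years).
years = 3.3
total_reduction = 40.0

# Per-year cost divisor: solve annual_factor ** years == total_reduction.
annual_factor = total_reduction ** (1 / years)

# Moore's Law for comparison: roughly a 2x improvement every 2 years.
moore_annual = 2.0 ** (1 / 2)

print(f"Implied GPT-3 cost reduction: ~{annual_factor:.1f}x per year")
print(f"Moore's Law pace:             ~{moore_annual:.2f}x per year")
```

  [Under those assumptions the cost falls roughly 3x per year, versus about 1.4x per year for a Moore's-Law doubling every two years, which is consistent with the "way better than Moore's Law" claim.]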

  BILL GATES: What about the competition? Is that kind of a fun thing that many people are working on this all at once?

  SAM ALTMAN: It’s both annoying and motivating and fun. [Bill laughs] I’m sure you’ve felt similarly. It does push us to be better and do things faster. We are very confident in our approach. There are a lot of people that I think are skating to where the puck was, and we’re going to where the puck is going. It feels all right.

  BILL GATES: I think people would be surprised at how small OpenAI is. How many employees do you have?

  SAM ALTMAN: About 500, so we’re a little bigger than before.

  BILL GATES: But that’s tiny. [laughs] By Google, Microsoft, Apple standards –

  SAM ALTMAN: It’s tiny. We have to not only run the research lab, but now we have to run a real business and two products.

  BILL GATES: The scaling of all your capacities, including talking to everybody in the world, and listening to all those constituencies, that’s got to be fascinating for you right now.

  SAM ALTMAN: It’s very fascinating.

  BILL GATES: Is it mostly a young company?

  SAM ALTMAN: It’s an older company than average.

  BILL GATES: Okay.

  SAM ALTMAN: It’s not a bunch of 24-year-old programmers.

  BILL GATES: It’s true, my perspective is warped, because I’m in my 60s. I see you, and you’re younger, but you’re right. You have a lot in their 40s.

  SAM ALTMAN: Thirties, 40s, 50s.

  BILL GATES: It’s not the early Apple, Microsoft, where we were really kids.

  SAM ALTMAN: It’s not, and I’ve reflected on that. I think companies have gotten older in general, and I don’t know quite what to make of that. I think it’s somehow a bad sign for society, but I tracked this at YC. The best founders have trended older over time.

  BILL GATES: That’s fascinating.

  SAM ALTMAN: Then in our case, it’s a little bit older than the average, even still.

  BILL GATES: You got to learn a lot by your role at Y Combinator, helping these companies. I guess that was good training for what you’re doing now. [laughs]

  SAM ALTMAN: That was super helpful.

  BILL GATES: Including seeing mistakes.

  SAM ALTMAN: Totally. OpenAI did a lot of things that are very against the standard YC advice. We took four and a half years to launch our first product. We started the company without any idea of what a product would be. We were not talking to users. I still don’t recommend that for most companies, but having learned the rules and seen them at YC made me feel like I understood when and how and why we could break them. We really did things that were just so different than any other company I’ve seen.

  BILL GATES: The key was the talent that you assembled, and letting them be focused on the big, big problem, not some near-term revenue thing.

  SAM ALTMAN: I think Silicon Valley investors would not have supported us at the level we needed, because we had to spend so much capital on the research before getting to the product. We just said, "Eventually the model will be good enough that we know it’s going to be valuable to people." But we were very grateful for the partnership with Microsoft, because this kind of way-ahead-of-revenue investing is not something that the venture capital industry is good at.

  BILL GATES: No, and the capital costs were reasonably significant, almost at the edge of what venture would ever be comfortable with.

  SAM ALTMAN: Maybe past.

  BILL GATES: Maybe past. I give Satya incredible credit for thinking through ‘how do you take this brilliant AI organization, and couple it into the large software company?’ It has been very, very synergistic.

  SAM ALTMAN: It’s been wonderful, yes. You really touched on it, though, and this was something I learned from Y Combinator. We said, we are going to get the best people in the world at this. We are going to make sure that we’re all aligned at where we’re going and this AGI mission. But beyond that, we’re going to let people do their thing. We’re going to realize it’s going to go through some twists and turns and take a while.

  We had a theory that turned out to be roughly right, but a lot of the tactics along the way turned out to be super wrong. We just tried to follow the science.

  BILL GATES: I remember going and seeing the demonstration and thinking, okay, what’s the path to revenue on that one? What is that like? In these frenzied times, you’re still holding on to an incredible team.

  SAM ALTMAN: Yes. Great people really want to work with great colleagues.

  BILL GATES: That’s an attractive force.

  SAM ALTMAN: There’s a deep center of gravity there. Also, it sounds so cliche, and every company says it, but people feel the mission so deeply. Everyone wants to be in the room for the creation of AGI.

  BILL GATES: It must be exciting. I can see the energy when you come up and blow me away again with the demos; I’m seeing new people, new ideas. You’re continuing to move at a really incredible speed.

  SAM ALTMAN: What’s the piece of advice you give most often?

  BILL GATES: There are so many different forms of talent. Early in my career, I thought, just pure IQ, like engineering IQ, and of course, you can apply that to financial and sales. That turned out to be so wrong. Building teams where you have the right mix of skills is so important. Getting people to think, for their problem, how do they build that team that has all the different skills, that’s probably the one that I think is the most helpful. Yes, telling kids, you know, math, science is cool, if you like it, but it’s that talent mix that really surprised me.

  What about you? What advice do you give?

  SAM ALTMAN: It’s something about how most people are mis-calibrated on risk. They’re afraid to leave the soft, cushy job behind to go do the thing they really want to do, when, in fact, if they don’t do that, they look back at their lives like, "Man, I never went to go start this company I wanted to start, or I never tried to go be an AI researcher." I think that’s sort of much riskier.

  Related to that, being clear about what you want to do, and asking people for what you want, goes a surprisingly long way. A lot of people get trapped spending their time in ways they don’t want to. Probably the most frequent advice I give is to try to fix that one way or another.

  BILL GATES: If you can get people into a job where they feel they have a purpose, it’s more fun. Sometimes that’s how they can have gigantic impact.

  SAM ALTMAN: That’s for sure.

  BILL GATES: Thanks for coming. It was a fantastic conversation. In the years ahead, I’m sure we’ll get to talk a lot more, as we try to shape AI in the best way possible.

  SAM ALTMAN: Thanks a lot for having me. I really enjoyed it.

  [music]

  BILL GATES: Unconfuse Me is a production of the Gates Notes. Special thanks to my guest today, Sam Altman.

  BILL GATES: Remind me what your first computer was?

  SAM ALTMAN: A Mac LC2.

  BILL GATES: Nice choice.

  SAM ALTMAN: It was a good one. I still have it; it still works.