11 Comments

The God of Mischief Returns

I say this with love, my good man.


Yes, indeed! Although I think I will make a little less trouble this time around.


Hey Martin, very cool to see you on Substack. I just "recommended" your Substack and am looking forward to following your writing. I will probably be a regular in your comments section, haha.

If you get a minute, check out my Substack; I mainly investigate corruption/fraud in academia but also blog about prediction markets, economics, culture wars, etc.

https://karlstack.substack.com/


Is the crypto section satire? Also, welcome back home, Shkreli.


Not at all! Just fantasizing about potential future outcomes; they don't have to have high probabilities.


Glad you are back. Good read.


Good to see you're back, man. I interacted with you briefly on the blog when you were in prison, and on Reddit years earlier when WSB was in its infancy.

If you ever feel like doing YouTube streams in the future, we'd love to hear from you again.


The last prediction for "Computing" is all wrong lmao


The DNA computing part? I have to do some real research in that space. Probably better for storage than compute? Silicon transistors are pretty damn small, but I wonder if there are thermodynamic benefits to DNA.


I meant the whole section, starting with web3, and especially the predictions about AGI. There's some progress in AI that in 20 years will give us more sophisticated tools, but I don't see a capable AGI on the horizon in 20 years unless we have some kind of black-swan-style breakthrough, which is very, very unlikely. My bet: at least 100 years.


We probably have some time before true AGI, but I think we're not far from it in some incomplete sense. LLMs are certainly causing a stir, and when added to Nilsson's triple-tower model, I think you have a real case for something that is close to at least faking it. The REPL-style interaction LLMs like GPT-3 give us now is obviously far from human-like; it feels more like calling an API.

It also depends on your opinion of LLMs (I can guess where you stand!). This article on "attention heads" was quite interesting: https://www.quantamagazine.org/researchers-glimpse-how-ai-gets-so-good-at-language-processing-20220414/

If you feel like LLMs are doing something that resembles human reasoning (which cannot be *much* different from some undefined vector optimization going on in our wetware), then I'd say we're not far off. But if you feel LLMs are a parlor trick, then I would guess the onus is on you to explain what we're missing. Penrose/Rumsfeld-style explanations, that we don't know what we don't know, seem lacking to me. Computers have checked a lot of boxes in things humans can't do well, and obviously still need to check more boxes in things humans do easily, à la McCarthy. Yet the new text-to-image models are quite impressive, and they check off something a five-year-old can do that a computer previously couldn't.

Let me know what you think!
