ChatGPT, "Artificial Intelligence," Humanity and Beyond

In essence, yes it was. That is an essential starting point of many perspectives.
I don't think that article stated or implied that premise at all, nor did the argument implicitly rely on it. As I read the article, one's view of people (or the reality of people's goodness or lack thereof) is 100% irrelevant to the point he was making about the most probable outcome of the path we are on with AI.

What am I missing?
 

Well, the conclusion kinda betrays a negative view of humanity.

I know I see things on deeper or more fundamental levels, and I sometimes don't communicate what I see properly.
 
From my friend Felix this morning.

"(Contemplating my odd passivity in the face of the AI hype)

I used to envision myself moving to the US, starting a company and building incredible stuff.

I went into machine learning, because it's the most exciting technology to me.

I'm "the automation guy" - and machine learning has the highest potential of our times to automate.

On top of that, I'm the "articulation guy" who's all about writing down my thoughts (e.g. this post) - and solving all problems with explicit articulation.

For all these reasons, LLMs are the coolest thing to me.
It's straight-up magic.

Ever since GPT made it big, investment has been pouring in, academics have quit to go into startups, and there aren't enough GPUs available to meet the model-training demand.

(Btw, you might want to buy Nvidia stock if you haven't already, but that's a different topic)

===

...and yet - I'm not particularly motivated to actually build this stuff.

Why?

Well, Google and Microsoft, with their PhDs and their billions, aren't even able to approximate GPT.

Then, you've got open-source stuff like Alpaca that can already run on your local machine and sort of approximate GPT-3.5.

And I already have access to GPT-4 - the best one there is - and it's so exciting to just use this thing that I can barely think of anything else to do anymore.

===

I guess I consider "building LLMs" as a game for other people who are more specialized already.

I know what it's like to train models; I've been doing it for five years - and it's kind of a pain in the ass compared to the excitement of actually experiencing a finished model.

I guess I'm more interested in using existing LLMs than in building new ones 🤔

===

That being said, there's this dream in me about an "objective LLM" - as I'd expect lives in many techy Oists.

One major idea of Rand's was that concepts are objective - if they are formed correctly.

Obviously this places on the horizon an LLM that has an objective "understanding" of reality - and can grasp abstract connections like "employment is a human relationship just like friendship - united under 'trade'".

An LLM that's an abstract genius and sees connections between things that would have required human geniuses until that point.

===

The "objective LLM" excites me - but I also know that the whole problem boils down to training data - which means someone would have to write tons and tons of texts which use only objectively formed concepts.

That means you'd basically need an army of geniuses to just write something like encyclopedias - to even have the building blocks for this kind of thing.

And... as much as I'd want this to be real, I don't want to spend my life writing encyclopedias together with other Oists.

Just imagine the infighting over what the objective meaning of something is 🤔

===

That leads me back to two things:

1. Just continue using GPT because it's awesome as hell
2. Build applications on top of existing LLMs

Yeah 🤔 I guess that'll probably be my chosen fate

===

Younger-me would probably beat present-me up and yell "why did you let your drive die?!"

I'm not sure what happened to it.

Younger-me would be buying hardware and training LLMs right now - out of sheer excitement about what's possible - without even an explicit purpose outside of "it's cool!".

My hunch is that it has something to do with the cabin fever here in hot-ass Arizona.

I gotta get out of here.

August can't come soon enough 🤔
(The time we want to move)"
 
(I'd say AMD stock, since their GPUs consistently offer more VRAM for less money, but like you said, that's another discussion) :coffee:
 

I've always been more of an Nvidia guy, but that is certainly something I hadn't considered.
 