  The most prestigious law school admissions discussion board in the world.

how does GPT-3 NOT imply human level AI is near?

https://www.gwern.net/GPT-3 https://www.lesswrong.com/pos...
Filthy hospital
  05/28/21
can it create better posters
Irradiated cuckold liquid oxygen
  05/28/21
gpt-3 isn't open source, but someone could fine tune Google'...
Filthy hospital
  05/28/21
Yeah
Swashbuckling french ticket booth
  05/29/21
Better than bbooom and honiara, you mean?
violent cream gaping
  05/29/22
cherry picked good results
Rusted legend
  05/28/21
this and also many of the most coherent outputs being just r...
hyperventilating costumed theater stage pozpig
  05/28/21
it's doing something more sophisticated than that. it shows ...
Bright Hideous Stain Stead
  05/28/21
they aren't that cherry picked. humans can't distinguish bet...
Bright Hideous Stain Stead
  05/28/21
I got tendinitis from jerking it to AI Dungeon rape fantasy ...
Drab Indian Lodge Kitty
  06/05/21
It was fun pushing it to see how fucked up you could make th...
Bright Hideous Stain Stead
  06/05/21
Why should we be freaking out
Trip flickering property jewess
  05/28/21
even if you don't believe this will produce AGI, it seems cl...
Bright Hideous Stain Stead
  05/28/21
And? You still need to understand how systems work and code ...
Trip flickering property jewess
  05/28/21
we aren't just talking about coding. i also don't think soph...
Bright Hideous Stain Stead
  05/28/21
"sophisticated programming tasks " oh you mean ...
Pea-brained sanctuary prole
  06/05/21
like everything. the reason why there has been so much progr...
Bright Hideous Stain Stead
  06/05/21
i think you guys are still exaggerating and dont actually ha...
Pea-brained sanctuary prole
  06/05/21
ok. i don't know that there's a really easy way to persuade ...
Bright Hideous Stain Stead
  06/05/21
thank you ill check it out
Pea-brained sanctuary prole
  06/05/21
i'd also encourage you to take a close look at OpenAI's rese...
Bright Hideous Stain Stead
  06/05/21
it's just memorizing statistical distributions, it's not exe...
Aromatic really tough guy deer antler
  05/28/21
it doesn't do those things, but it's pretty decisive evidenc...
Bright Hideous Stain Stead
  05/28/21
GPT3 doesnt have that marriage of RL and unsupervised learni...
Aromatic really tough guy deer antler
  05/28/21
i agree it doesn't. unsupervised learning has been regarded ...
Bright Hideous Stain Stead
  05/28/21
"you can just use that to generate data internally and ...
Aromatic really tough guy deer antler
  05/29/21
so RL works pretty well right now, but it's still data const...
Bright Hideous Stain Stead
  05/29/21
some of my discord frens are obsessed with this shit but all...
Rusted legend
  05/29/21
One biz idea i had was to create an interactive document edi...
Bright Hideous Stain Stead
  05/29/21
interesting
Aromatic really tough guy deer antler
  05/29/21
this is kind of the human brain works, at least to my unders...
Bright Hideous Stain Stead
  05/29/21
could be. i think we can come up with better design than the...
Aromatic really tough guy deer antler
  05/29/21
do you think independent or university based teams could cre...
Rusted legend
  05/29/21
i doubt it the way things are going. compute budgets to trai...
Bright Hideous Stain Stead
  05/29/21
here's another nice example of why it will probably be Googl...
Bright Hideous Stain Stead
  06/10/21
Cr poast
frisky razzle depressive
  06/05/21
this is a very recent paper on using transformers (the model...
Bright Hideous Stain Stead
  06/05/21
I'm afraid I can't do that Dave.
orange fanboi
  05/29/21
Good thread
frisky razzle depressive
  06/05/21
the robots are the human ones
Pea-brained sanctuary prole
  06/05/21
The biggest problem I think w AI is that there isn't a compu...
Blathering parlor party of the first part
  06/05/21
The only human level capabilities I can think of that we don...
Beady-eyed jet-lagged mood
  06/10/21
we definitely don't want general AI to be able to do either ...
frisky razzle depressive
  06/10/21
i think self-awareness arises as a natural consequence of a ...
Bright Hideous Stain Stead
  06/10/21
Cr The comment you’re replying to is 100% OpenAI GP...
Beady-eyed jet-lagged mood
  06/10/21
sounds like it. i read it several times and couldn't underst...
Bright Hideous Stain Stead
  06/10/21
180
Beady-eyed jet-lagged mood
  06/10/21
what do you think of Google's Lamda? the buzz is that it's b...
Bright Hideous Stain Stead
  06/10/21
Looks good but can't see Google releasing it for public use....
Beady-eyed jet-lagged mood
  06/10/21
The API model of OpenAi looks potentially very profitable. I...
Bright Hideous Stain Stead
  06/10/21
https://www.youtube.com/watch?v=aUSSfo5nCdM
Bright Hideous Stain Stead
  06/10/21
...
Drab Indian Lodge Kitty
  06/10/21
...
Filthy hospital
  06/11/21
"we only lack self-awareness" cant make this ...
Pea-brained sanctuary prole
  06/10/21
it's just normal GPT-3 word salad. it's great at this.
Bright Hideous Stain Stead
  06/10/21
google created generally capable agents that transfer strate...
Bright Hideous Stain Stead
  07/28/21
Good read: Gwern’s Retrospective on the 2 Year Anniver...
Beady-eyed jet-lagged mood
  05/29/22
Yeah. At this point it's really obvious that scaling is abou...
Bright Hideous Stain Stead
  06/03/22
...
Beady-eyed jet-lagged mood
  06/04/22
It's also worth noting gwern was an ai skeptic and is now to...
Bright Hideous Stain Stead
  06/04/22





Reply Favorite

Date: May 28th, 2021 10:14 PM
Author: Filthy hospital

https://www.gwern.net/GPT-3

https://www.lesswrong.com/posts/6Hee7w2paEzHsD6mn/collection-of-gpt-3-results

https://blogs.microsoft.com/ai/from-conversation-to-code-microsoft-introduces-its-first-product-features-powered-by-gpt-3/

what does even just 5-10 more years of progress in this domain look like? seems like people should be freaking out about this stuff.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538365)



Reply Favorite

Date: May 28th, 2021 10:15 PM
Author: Irradiated cuckold liquid oxygen

can it create better posters



(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538369)



Reply Favorite

Date: May 28th, 2021 10:19 PM
Author: Filthy hospital

gpt-3 isn't open source, but someone could fine tune Google's BERT on XO's database of posts and probably make a decent poaster
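
for the curious, a fine-tune like that is only a few lines with the Hugging Face transformers library. one caveat: BERT is a masked (fill-in-the-blank) model, so for actually generating poasts a causal model like GPT-2 is the more natural choice. the sketch below assumes a hypothetical xo_posts.txt dump of board posts, and the hyperparameters are arbitrary.

# hypothetical sketch: fine-tune a small causal LM (GPT-2) on a dump of posts.
# "xo_posts.txt" is a made-up filename; swap in whatever corpus you actually have.
from transformers import (AutoTokenizer, AutoModelForCausalLM, TextDataset,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# one big plain-text file of posts, chopped into fixed-length blocks
dataset = TextDataset(tokenizer=tokenizer, file_path="xo_posts.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)  # causal LM, not masked

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="poaster", num_train_epochs=3,
                           per_device_train_batch_size=8),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()

# sample a new poast from a short prompt
model.cpu()
prompt = tokenizer("how does GPT-3 NOT imply", return_tensors="pt")
out = model.generate(**prompt, max_length=80, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))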

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538393)



Reply Favorite

Date: May 29th, 2021 3:09 AM
Author: Swashbuckling french ticket booth

Yeah

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42539334)



Reply Favorite

Date: May 29th, 2022 7:11 PM
Author: violent cream gaping

Better than bbooom and honiara, you mean?

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#44594893)



Reply Favorite

Date: May 28th, 2021 10:31 PM
Author: Rusted legend

cherry picked good results

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538446)



Reply Favorite

Date: May 28th, 2021 10:43 PM
Author: hyperventilating costumed theater stage pozpig

this, and also many of the most coherent outputs are just repetitions of other parts of the one document on the internet where the input phrase was used

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538506)



Reply Favorite

Date: May 28th, 2021 10:49 PM
Author: Bright Hideous Stain Stead

it's doing something more sophisticated than that. it shows runtime meta-learning, in the sense that you can teach it to do things via prompting that it hasn't been exposed to.
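
concretely, it looks like this: no gradient updates anywhere, the "training examples" just sit in the prompt. a rough sketch against the OpenAI completion API as it existed at the time; the toy translation task and parameters are just illustrative.

import openai  # assumes an API key is already configured

prompt = """Translate English to French.
English: the court is in session
French: la cour est en séance
English: the exam is tomorrow
French: l'examen est demain
English: the model picked up the task from two examples
French:"""

resp = openai.Completion.create(
    engine="davinci",   # base GPT-3 model
    prompt=prompt,
    max_tokens=32,
    temperature=0.0,
    stop="\n",
)
print(resp.choices[0].text.strip())
# the "learning" happens entirely in-context, at runtime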

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538538)



Reply Favorite

Date: May 28th, 2021 10:47 PM
Author: Bright Hideous Stain Stead

they aren't that cherry picked. humans can't distinguish between GPT-3 and human-written news articles better than chance. if you have ever played around with AI Dungeon (make sure it's set to use GPT-3 and not GPT-2 or whatever), it's extremely impressive. a lot of the time when it goes off the rails, it's because of poor prompting, not limitations of the model.

it's also not close to the limit of what could be trained today. it only uses 175 billion parameters and is only trained on text data using a limited attentional window. several hundred trillion parameter models using multimodal data and without the fixed attentional window limitation will be trainable in the near future. i don't know exactly what those models will be capable of doing, but it seems obvious they will be much, much more capable.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538529)



Reply Favorite

Date: June 5th, 2021 8:03 PM
Author: Drab Indian Lodge Kitty

I got tendinitis from jerking it to AI Dungeon rape fantasy sequences

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42580120)



Reply Favorite

Date: June 5th, 2021 8:04 PM
Author: Bright Hideous Stain Stead

It was fun pushing it to see how fucked up you could make the story.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42580124)



Reply Favorite

Date: May 28th, 2021 10:50 PM
Author: Trip flickering property jewess

Why should we be freaking out

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538540)



Reply Favorite

Date: May 28th, 2021 10:53 PM
Author: Bright Hideous Stain Stead

even if you don't believe this will produce AGI, it seems clear with a bit more refinement it will be very economically disruptive. writing text is about to become a whole lot more efficient than paying some office drone to churn it out from scratch, for example.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538556)



Reply Favorite

Date: May 28th, 2021 10:56 PM
Author: Trip flickering property jewess

And? You still need to understand how systems work and code interfaces with protocols and everything. It might just mean fewer Pradeeps copy-pasting from Stack Overflow. Who cares. It is futile to artificially limit the effects of technology, all of this shit is going to happen anyway one way or the other.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538571)



Reply Favorite

Date: May 28th, 2021 11:03 PM
Author: Bright Hideous Stain Stead

we aren't just talking about coding. i also don't think sophisticated programming tasks are out of reach of scaled up transformers.

i don't think this can be stopped either. AI alignment should be a more pressing concern now though.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538604)



Reply Favorite

Date: June 5th, 2021 1:33 PM
Author: Pea-brained sanctuary prole

"sophisticated programming tasks "

oh you mean like compiling code? been hearing this since the 80s man

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42578758)



Reply Favorite

Date: June 5th, 2021 1:57 PM
Author: Bright Hideous Stain Stead

like everything. the reason why there has been so much progress in AI in the last few years has nothing to do with theoretical insights. if you had gone back to 1970 and given researchers access to modern computers, they would have reached essentially the same point we're at right now within just a few years. GPT-3 is basically just a feedforward neural network trained on a shitload of data. what could be simpler? AI's failures in the past all tie into a lack of computing power, which forced the use of rigid, non-learning-based methods. these don't generalize, whereas current techniques do.

project this same trend forward a few years with multi-exaflop supercomputing clusters working on ever larger, more multimodal data sets. the connectionists were clearly right and very general AI systems are near. this will be undeniable before the end of this decade.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42578829)



Reply Favorite

Date: June 5th, 2021 2:07 PM
Author: Pea-brained sanctuary prole

i think you guys are still exaggerating and don't actually have any idea what the quality of the kind of work that will be required actually is. that's been my impression every time someone's shown me an example

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42578854)



Reply Favorite

Date: June 5th, 2021 3:03 PM
Author: Bright Hideous Stain Stead

ok. i don't know that there's a really easy way to persuade someone that this is true. i would suggest reading:

https://www.lesswrong.com/posts/9Yc7Pp7szcjPgPsjf/the-brain-as-a-universal-learning-machine

and:

https://www.gwern.net/Scaling-hypothesis

there are some other popular books out there that might be useful (Jeff Hawkins's A Thousand Brains is a recent one that comes to mind). if you look at what's understood about neocortical function and the evidence for pure scaling from modern ML experiments, i think it's hard to come to a different conclusion.



(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42579017)



Reply Favorite

Date: June 5th, 2021 3:34 PM
Author: Pea-brained sanctuary prole

thank you ill check it out

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42579127)



Reply Favorite

Date: June 5th, 2021 7:57 PM
Author: Bright Hideous Stain Stead

i'd also encourage you to take a close look at OpenAI's research. they have kind of made a point of seeing how far they can push simple algorithms by just throwing enormous computational resources at them. not just GPT-3, but things like OpenAI Five, where they achieved very strong performance in Dota 2 with a very simple reinforcement learning algorithm and tons of training time. it helps if you can read up on the basics of NNs and RL.
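
to give a sense of how little machinery the core of a basic RL algorithm needs, here's a bare-bones policy gradient (REINFORCE) loop on CartPole. to be clear, OpenAI Five used PPO at enormous scale; this is just a toy sketch of the basics, with arbitrary hyperparameters, written against the classic gym API.

import gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))  # observation -> action logits
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):
    obs, done, logps, rewards = env.reset(), False, [], []
    while not done:
        dist = torch.distributions.Categorical(logits=policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        obs, reward, done, _ = env.step(action.item())
        logps.append(dist.log_prob(action))
        rewards.append(reward)
    # push up the log-probability of each action in proportion to the return that followed it
    returns = torch.tensor([sum(rewards[t:]) for t in range(len(rewards))])
    loss = -(torch.stack(logps) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()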

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42580095)



Reply Favorite

Date: May 28th, 2021 11:06 PM
Author: Aromatic really tough guy deer antler

it's just memorizing statistical distributions, it's not exercising any executive function re: which tasks to attend to or how to define/modify its goals or loss functions. it's a big-ass lookup table

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538616)



Reply Favorite

Date: May 28th, 2021 11:22 PM
Author: Bright Hideous Stain Stead

it doesn't do those things, but it's pretty decisive evidence in favor of the view that the scaling hypothesis is true and that just scaling up neural networks with more data and parameters will produce more and more intelligent behavior.

the emerging view of the neocortex is that it performs some sort of unsupervised learning which functions as a model that the reinforcement learning parts of the brain use to increase the reward signal. whatever algorithm the brain uses in the neocortex seems to be uniform - different parts of the cortex seem to wire up basically the same way. this one algorithm hypothesis seems to support the notion that you can take a simple learning algorithm with minimal inductive biases and it will produce intelligence if you give it sufficient data and train it correctly. this is why the transformer works for image generation or recognition as well as natural language generation. just scaling it up by a few orders of magnitude and pairing it with a reinforcement learning architecture will probably give something approaching human intelligence in many domains.
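
concretely, this is the sense in which it's "one algorithm": the exact same transformer encoder can consume text tokens and image patches, and only the embedding step differs. toy shapes below, not how GPT-3 or a real vision transformer is actually configured.

import torch
import torch.nn as nn

d_model = 256
encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model, nhead=8), num_layers=6)

# text: integer token ids -> learned embeddings
token_emb = nn.Embedding(50_000, d_model)
text_tokens = token_emb(torch.randint(0, 50_000, (128, 4)))     # (seq_len=128, batch=4, d_model)

# images: 16x16 RGB patches, flattened and linearly projected (the vision-transformer recipe)
patch_proj = nn.Linear(16 * 16 * 3, d_model)
image_tokens = patch_proj(torch.randn(196, 4, 16 * 16 * 3))     # 14x14 patches per image

print(encoder(text_tokens).shape)    # torch.Size([128, 4, 256])
print(encoder(image_tokens).shape)   # torch.Size([196, 4, 256])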

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538678)



Reply Favorite

Date: May 28th, 2021 11:36 PM
Author: Aromatic really tough guy deer antler

GPT-3 doesn't have that marriage of RL and unsupervised learning. maybe it's coming, but it would be a big paradigm shift from what we've seen so far. still doesn't address the lack of 'meta' RL re: task definition and loss metric evolution. not saying all this isn't coming, just saying GPT-3 doesn't portend much one way or the other

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538723)



Reply Favorite

Date: May 28th, 2021 11:45 PM
Author: Bright Hideous Stain Stead

i agree it doesn't. unsupervised learning has been regarded as a pretty big question for a while though, and it looks to me like it won't be terribly hard to solve. i think the RL part is a lot easier once you have a powerful/general unsupervised model. the major problem with RL right now is data efficiency. if you have a powerful unsupervised learning model, you can just use that to generate data internally and have the RL algorithm find policies using that.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42538755)



Reply Favorite

Date: May 29th, 2021 12:52 AM
Author: Aromatic really tough guy deer antler

"you can just use that to generate data internally and have the RL algorithm find policies using that." walk me through that. sounds like an interesting idea, but i'm not quite sure what you're saying. generating synthetic data for the RL algo, then?

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42539036)



Reply Favorite

Date: May 29th, 2021 1:10 AM
Author: Bright Hideous Stain Stead

so RL works pretty well right now, but it's still data constrained. you can get superhuman performance in almost every atari game and board game, but that's because you can brute force those environments. if you want to use RL for some real-world environment, there's no readily available simulator that you can use for that right now. for example, if you want to train a military drone to efficiently kill people, it's not feasible to give it a kill terrorist reward function and set it loose hunting people down for millions of hours until it learns a good policy.

the solution is powerful unsupervised generative modelling. train multimodal transformers (or something like it) to extract powerful representations of the environment so they can hallucinate possible scenarios. imagine feeding in millions of hours of video data so the transformer can learn basically how the real world behaves. have the RL algorithm learn based on these generative models rather than through real world data. this opens up a ton of interesting and economically important applications. i think this will ultimately be how true driverless cars will be achieved.
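
roughly, in code, the loop looks like this. everything here (shapes, networks, the fact that the world model is left untrained) is a made-up sketch just to show the structure; real systems in this family (World Models, Dreamer, etc.) are far more involved.

import torch
import torch.nn as nn

state_dim, action_dim = 32, 4

# generative world model: (state, action) -> (next state, reward).
# in practice this would be a large sequence model trained on logged real-world data.
world_model = nn.Sequential(nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
                            nn.Linear(256, state_dim + 1))

policy = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                       nn.Linear(256, action_dim), nn.Tanh())
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

for step in range(1000):
    state = torch.randn(64, state_dim)        # batch of imagined starting states
    total_reward = torch.zeros(())
    for t in range(15):                       # short rollout entirely inside the model
        action = policy(state)
        pred = world_model(torch.cat([state, action], dim=-1))
        state, reward = pred[:, :-1], pred[:, -1]
        total_reward = total_reward + reward.mean()
    loss = -total_reward                      # maximize reward predicted by the world model
    opt.zero_grad()
    loss.backward()
    opt.step()
    # the policy never touches the real environment in this loop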

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42539097)



Reply Favorite

Date: May 29th, 2021 1:10 AM
Author: Rusted legend

some of my discord frens are obsessed with this shit but all the easy money is in enterprise web apps

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42539101)



Reply Favorite

Date: May 29th, 2021 4:11 PM
Author: Bright Hideous Stain Stead

One biz idea i had was to create an interactive document editor using these models. It could have various features - generate text based on a prompt, rephrase a section of a document, complete a document, summarize a document, etc. This seems very feasible now using gpt-3 and could be worth a lot if it is easy to use. Turn it into a web service.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42541298)



Reply Favorite

Date: May 29th, 2021 1:17 AM
Author: Aromatic really tough guy deer antler

interesting

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42539116)



Reply Favorite

Date: May 29th, 2021 1:19 AM
Author: Bright Hideous Stain Stead

this is kind of how the human brain works, at least to my understanding. the neocortex is the generative model helping the RL part to find good policies without having to interact with the environment.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42539124)



Reply Favorite

Date: May 29th, 2021 1:24 AM
Author: Aromatic really tough guy deer antler

could be. i think we can come up with a better design than the human brain, but it's not a bad thing to check our work against.



(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42539143)



Reply Favorite

Date: May 29th, 2021 1:29 AM
Author: Rusted legend

do you think independent or university based teams could create AGI first? I don't see why there's this huge gold rush into CS grad school and shit when there's eventually gonna be a 100-man Google or Microsoft team that's credited for the bulk of the good work. A lot of the motivation behind science is individual glory imo

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42539153)



Reply Favorite

Date: May 29th, 2021 2:19 AM
Author: Bright Hideous Stain Stead

i doubt it the way things are going. compute budgets to train AGI-like systems will be in the range of billions (or possibly trillions) of dollars. it also appears to me that there is more and more research where the human element is successfully taken out of the equation. meta-learning backpropagation, learning the optimizer, learning the neural net architecture, learning data augmentation strategies - there's an increasingly large, successful line of research where human-designed algorithms are surpassed by machine-designed ones. i think the quickest path to AGI is to just meta-learn everything and hand-design pretty much nothing. this is a compute-heavy research strategy that will disadvantage smaller groups significantly.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42539258)



Reply Favorite

Date: June 10th, 2021 8:29 PM
Author: Bright Hideous Stain Stead

here's another nice example of why it will probably be Google or another giant tech company. Google uses ML algorithms to get superhuman performance at AI chip design:

https://www.theverge.com/2021/6/10/22527476/google-machine-learning-chip-design-tpu-floorplanning

tensor processing units are Google's own chips used for machine learning training and inference.

"Google is using machine learning to help design its next generation of machine learning chips. The algorithm’s designs are “comparable or superior” to those created by humans, say Google’s engineers, but can be generated much, much faster. According to the tech giant, work that takes months for humans can be accomplished by AI in under six hours."

just like the architecture/meta-learning research before it, AI at sub-human levels is being used to bootstrap AI research.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607383)



Reply Favorite

Date: June 5th, 2021 1:23 PM
Author: frisky razzle depressive

Cr poast

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42578727)



Reply Favorite

Date: June 5th, 2021 1:04 PM
Author: Bright Hideous Stain Stead

this is a very recent paper on using transformers (the model used in GPT-3) for reinforcement learning as well:

https://sites.google.com/berkeley.edu/decision-transformer

it turns out you can treat RL itself as a sequence prediction problem. just feed in past states, actions and rewards and the model outputs future actions to achieve the desired reward. so it's very plausible no RL machinery has to be designed except for specifying a reward function. it looks plausible to me that you could then get AGI from just training a really large transformer model.
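
the core trick is just how the trajectory gets serialized into a token sequence. a toy sketch of that serialization (sizes and networks are placeholders, not the actual paper's code):

import torch
import torch.nn as nn

state_dim, action_dim, d_model, T = 17, 6, 128, 20

embed_rtg      = nn.Linear(1, d_model)           # return-to-go (desired future reward)
embed_state    = nn.Linear(state_dim, d_model)
embed_action   = nn.Linear(action_dim, d_model)
predict_action = nn.Linear(d_model, action_dim)

# a causally-masked transformer over the interleaved sequence ..., R_t, s_t, a_t, ...
backbone = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model, nhead=8), num_layers=4)

rtg     = torch.randn(T, 1, 1)                   # dummy trajectory, batch size 1
states  = torch.randn(T, 1, state_dim)
actions = torch.randn(T, 1, action_dim)

tokens = torch.stack([embed_rtg(rtg), embed_state(states), embed_action(actions)], dim=1)
tokens = tokens.reshape(3 * T, 1, d_model)       # interleave: R_0, s_0, a_0, R_1, s_1, a_1, ...

causal_mask = torch.triu(torch.full((3 * T, 3 * T), float("-inf")), diagonal=1)
hidden = backbone(tokens, mask=causal_mask)

# read the predicted action off the hidden state at each state token;
# at test time you condition on a high desired return and roll the actions out
pred_actions = predict_action(hidden[1::3])      # state tokens sit at positions 1, 4, 7, ...
print(pred_actions.shape)                        # torch.Size([20, 1, 6])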

Time to buy Nvidia, AMD, Google, Microsoft, etc.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42578668)



Reply Favorite

Date: May 29th, 2021 5:05 AM
Author: orange fanboi

I'm afraid I can't do that Dave.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42539443)



Reply Favorite

Date: June 5th, 2021 1:29 PM
Author: frisky razzle depressive

Good thread

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42578744)



Reply Favorite

Date: June 5th, 2021 1:30 PM
Author: Pea-brained sanctuary prole

the robots are the human ones

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42578748)



Reply Favorite

Date: June 5th, 2021 1:56 PM
Author: Blathering parlor party of the first part

The biggest problem I think w AI is that there isn't a computer equivalent to giving dopamine hits, and so there isn't a full reward system in place. we need ai to eat and jog and have sex and give it dopamine and then eventually it will turn into real doods.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42578822)



Reply Favorite

Date: June 10th, 2021 8:39 PM
Author: Beady-eyed jet-lagged mood

The only human level capabilities I can think of that we don't have are:

1) Self-awareness

2) Self-modification of programming

While I think we'll get to 1) eventually via some sort of bootstrapping process, I don't see any reason at all to think we won't get to 2) in a variety of ways.

For example, perhaps we'll teach a neural network to write a program to do something (e.g., play chess), and we'll include a meta-level dialogue box that allows the neural network to ask for some code to be included in the final program. This would allow the neural network to modify itself, and we'd get self-modification without actually getting to self-awareness.

Or perhaps we'll design a neural network to play chess, and then ask it to write a program to play chess. This way it would have self-awareness, but wouldn't have the capability to modify its own programming.

Or perhaps we'll get to self-awareness and self-modification by some other route?

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607441)



Reply Favorite

Date: June 10th, 2021 8:47 PM
Author: frisky razzle depressive

we definitely don't want general AI to be able to do either one of these things though

like really, really do not want it to ever be able to do this

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607490)



Reply Favorite

Date: June 10th, 2021 8:53 PM
Author: Bright Hideous Stain Stead

i think self-awareness arises as a natural consequence of a system doing model-based reinforcement learning. as the system interacts with the environment, the model learns to compress the sense data coming into it. so for the same reason it finds it useful to compress environmental sense data into simple representations/objects, it will also find it useful to create a mental representation of its own internal state. it learns to observe its own behavior and internal thought processes and then identifies that as "self." it's just an internal model the system uses to improve inference capabilities.

i don't know that self-modification of programming is necessary for human-level AI. it appears to me that humans have limited ability to self-modify. i think a plausible path would be to have an external neural network learn how to create neural network topologies and training algorithms, which are then trained on a bunch of different environments to achieve AGI. the controller network would essentially be learning how to do ML research through trial and error, and the child network would be the one which would actually achieve AGI. once the child network achieves a certain level of capability, it could then create some sort of alternate training system.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607538)



Reply Favorite

Date: June 10th, 2021 8:58 PM
Author: Beady-eyed jet-lagged mood

Cr

The comment you’re replying to is 100% OpenAI GPT-3.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607582)



Reply Favorite

Date: June 10th, 2021 9:00 PM
Author: Bright Hideous Stain Stead

sounds like it. i read it several times and couldn't understand it but tried my best to respond. lmao

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607601)



Reply Favorite

Date: June 10th, 2021 9:00 PM
Author: Beady-eyed jet-lagged mood

180

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607604)



Reply Favorite

Date: June 10th, 2021 9:02 PM
Author: Bright Hideous Stain Stead

what do you think of Google's LaMDA? the buzz is that it's better than GPT-3

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607625)



Reply Favorite

Date: June 10th, 2021 9:24 PM
Author: Beady-eyed jet-lagged mood

Looks good but can't see Google releasing it for public use. More likely as a personal assistant tool linked into commerce/services, and used internally to generate unlimited content to run ads against in search results.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607723)



Reply Favorite

Date: June 10th, 2021 9:30 PM
Author: Bright Hideous Stain Stead

OpenAI's API model looks potentially very profitable. I'm not sure why they wouldn't want to copy it

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607744)



Reply Favorite

Date: June 10th, 2021 9:02 PM
Author: Bright Hideous Stain Stead

https://www.youtube.com/watch?v=aUSSfo5nCdM

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607627)



Reply Favorite

Date: June 10th, 2021 9:54 PM
Author: Drab Indian Lodge Kitty



(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607872)



Reply Favorite

Date: June 11th, 2021 5:29 PM
Author: Filthy hospital



(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42612295)



Reply Favorite

Date: June 10th, 2021 10:10 PM
Author: Pea-brained sanctuary prole

"we only lack self-awareness"

can't make this shit up. you are dumb enough to be a human for sure

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42607945)



Reply Favorite

Date: June 10th, 2021 11:01 PM
Author: Bright Hideous Stain Stead

it's just normal GPT-3 word salad. it's great at this.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42608162)



Reply Favorite

Date: July 28th, 2021 12:34 PM
Author: Bright Hideous Stain Stead

google created generally capable agents that transfer strategies between different 3D environments and games and can do zero-shot learning in new settings. the cool part is that they don't have to do weight updates in new environments - the agents learn robust strategies and have learned how to learn on the fly. lmao if you don't think we are heading to AGI in <20 years.

https://deepmind.com/blog/article/generally-capable-agents-emerge-from-open-ended-play

the video showing in-game learning is interesting:

https://www.youtube.com/watch?v=lTmL7jwFfdw



(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#42856883)



Reply Favorite

Date: May 29th, 2022 6:47 PM
Author: Beady-eyed jet-lagged mood

Good read: Gwern’s Retrospective on the 2 Year Anniversary of GPT3 Release

https://www.reddit.com/r/mlscaling/comments/uznkhw/gpt3_2nd_anniversary/iab8vy2/?context=3

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#44594808)



Reply Favorite

Date: June 3rd, 2022 10:45 PM
Author: Bright Hideous Stain Stead

Yeah. At this point it's really obvious that scaling is about to fundamentally change human society.

Even the continued research on gpt-3 prompt design is really alarming. The fact that you can suddenly get it to solve arithmetic problems at a DRAMATICALLY higher rate just by adding something as stupid as "let's think this through step by step" is nuts. There is a lot of capability in even gpt-3 that we don't know how to efficiently extract. Now imagine what gigantic multimodal models will be capable of.
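
for anyone who wants to see how dumb the trick is, it's literally just appending that phrase to the prompt. a toy sketch with a made-up word problem, against the current OpenAI completion API:

import openai  # assumes an API key is already configured

question = "A firm bills 7 associates out at $300/hr for 6 hours each. What are the total fees?"

plain    = f"Q: {question}\nA:"
stepwise = f"Q: {question}\nA: Let's think step by step."   # the only difference

for prompt in (plain, stepwise):
    resp = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        max_tokens=128,
        temperature=0.0,
    )
    print(resp.choices[0].text.strip(), "\n---")

# the second prompt tends to elicit the intermediate arithmetic (7 * 6 = 42 hours,
# 42 * $300 = $12,600) before the final answer, and the accuracy jump comes from that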

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#44623240)



Reply Favorite

Date: June 4th, 2022 11:02 AM
Author: Beady-eyed jet-lagged mood



(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#44624600)



Reply Favorite

Date: June 4th, 2022 11:31 AM
Author: Bright Hideous Stain Stead

It's also worth noting gwern was an AI skeptic and is now totally scale-pilled. People who update on evidence are scale-pilled.

(http://www.autoadmit.com/thread.php?thread_id=4844889&forum_id=2#44624723)