"The False Promise of ChatGPT" Chomsky, Roberts, Watumull
-
@jack-waugh Chomsky's old and heavily invested in the "brains are magic" worldview. I see no reason why "creative, explanatory and moral" can't come from ChatGPT or one of its successors. I've already seen ChatGPT do plenty of things that, if a human did them, would be called creative.
This is just an exercise in moving the goalposts.
Here's an archived copy of his article: https://archive.is/cLNwI
-
Yeah, I mean ChatGPT has some big holes in its ability (as I've complained about), but it's also a bit scary what it's capable of sometimes. I don't think anything we can do is off limits in principle to a ChatGPT type thing.
-
@toby-pereira It's weird for Chomsky et al to be so dismissive of something that is obviously going to have very significant advances soon.
You know that as we speak, ChatGPT is being intensively trained (including with reinforcement learning from human feedback) on its own codebase. Think about that. It's going to get better, fast. Billions upon billions of dollars are going to be thrown at it.
The most notable advance in its language processing came from the invention of the transformer architecture in 2017 (at Google, I think; ChatGPT is riding that wave). It was a huge leap, and resulted in what can only be described as emergent properties, such as its ability to reason about source code (or, maybe I should say, do things that would be called "reasoning" if any human did them).
There are still, obviously, things that humans do better. Lots of things actually. Chomsky chose to latch onto those things. I think that's missing the point, since those things are becoming fewer.
Even if the "promise" is simply the ability to communicate freely to a computer to get it to do what you want, rather than using old fashioned user interfaces that rely on typing etc, it's still a big deal and its not a false promise.
-
Chomsky et al's arguments do not turn on a prejudice that brains are magic.
ChatGPT uses statistics to choose a probable next word in a conversation, based on literature (if material on the WWW can be called that) on which it has been trained using supercomputing. This approach, by its inherent nature, has little to do with judgment between right and wrong, neither in the sense of factual correctness nor the avoidance of harm.
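To make "choose a probable next word" concrete, here is a minimal sketch of the selection step only; the candidate words and their probabilities are invented placeholders, and the real work, computing those probabilities, is what the trained network does:

    // Hypothetical next-word probabilities for the prefix "The cat sat on the".
    // A real model computes these from billions of learned parameters.
    const nextWordProbs = { mat: 0.62, sofa: 0.21, roof: 0.12, moon: 0.05 };

    // Pick one word at random, in proportion to its probability.
    const sampleNextWord = probs => {
      let r = Math.random();
      for (const [word, p] of Object.entries(probs)) {
        r -= p;
        if (r <= 0) return word;
      }
      return Object.keys(probs)[0]; // guard against floating-point rounding
    };

    console.log(sampleNextWord(nextWordProbs)); // most often prints "mat"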
There is at least one real (as distinct from magic) physical difference between brains and the GPT procedure: the latter is much more expensive in terms of greenhouse-gas emissions and uses more raw compute power. The brain has some trick for producing intelligent behavior at much lower cost than the current success of GPT (and I credit GPT with Star-Trek levels of a computer's ability to talk like a human, which I did not expect to see in my lifetime, just as I did not expect to see the detection of gravitational waves), and to date, engineers have not succeeded in reproducing this trick. In principle, they could, but it hasn't been achieved yet.
The public needs to understand what GPT programs do not do. Can you leverage one to get intellectual work done faster? Yes, but you have to check everything it says. Another thing is to avoid becoming dependent on it, because its owners can and will move it behind a paywall once they have drawn what value they judge they can from feedback from free users.
-
@jack-waugh Chomsky's arguments aren't explicitly "brains are magic", but they kind of are in a sense. They are "brains are ever so complicated and special". They are complicated, but so is ChatGPT. And ChatGPT (and other language models, of course, but ChatGPT is the leader at the moment) isn't fully understood by the people who made it. It's fuzzy and statistical, layers upon layers that no one could ever really understand, or at least no one does right now. It's designed to be trained and to learn, more than it is designed to do a particular task. Just like humans.
"uses statistics to choose a probable next word in a conversation based on literature"
Somewhat minor quibble with that. While it is related to systems that work linearly (always looking at the last word), models that use transformers don't have that constraint. So really it's more like "jumping around in the text and constantly replacing and reworking parts of it until it best matches patterns, while being trained by humans on which outputs are the best and worst."
But when it is worded that way, it starts to sound a bit more like "continuously refine something until it is better based on both extensive examples, and guidance from human teachers." Which kinda sounds like what people do.
-
Sabine Hossenfelder weighs in.
-
By careful ordering and wording of the prompts one gives to ChatGPT, one can use it to stroke one's ego. In the following example, I started with an opinion in mind about implementing, in JavaScript, the "environment" concept as it is used on Unix. I got ChatGPT to agree with me by careful wording and ordering of the prompts. This is a long dialog, and I am not asking readers who don't find the first part interesting to tough out the whole thing. I am not driving toward some profound insight that would make it worth reading through.
I haven't yet tried the Chrome extension that @rob alerted us to. I am used to using Firefox most of the time. So, I'm pasting in:
Jack: Sometimes, a given phenomenon can be described in more than one way, and each description represents a viewpoint.
ChatGPT: Yes, that is correct. In many cases, a given phenomenon can be described in multiple ways, each of which represents a different viewpoint or perspective. For example, a particular event may be described in terms of its physical characteristics, such as its location, size, and duration, or in terms of its social and cultural significance, such as its impact on a particular community or group. Alternatively, it may be described in terms of its psychological effects on individuals or in terms of its historical context.
Each of these different viewpoints or perspectives can provide valuable insights into the phenomenon being described. However, it is important to recognize that no single viewpoint or perspective can fully capture the complexity of a phenomenon, and that different viewpoints may also be subject to biases or limitations. By considering multiple viewpoints or perspectives, we can gain a more comprehensive understanding of the phenomenon and its significance.
Jack: Consider an executable program on Unix or Linux. One way to describe it is as a procedure that can be applied to an argument list. According to this description, the protocol for calling the procedure with arguments involves these steps:
- the calling process must fork.
- the child process uses the Unix `exec` system call, giving it a pathname into the file system to reach the target program, and also supplying the arguments.
- the parent process exercises the `wait` system call.
ChatGPT: Yes, that is a correct description of the process for executing an executable program on Unix or Linux. To expand on each step:
- Forking: The first step is for the calling process to create a new child process using the `fork` system call. This creates an identical copy of the calling process, including all of its memory, open file descriptors, and other resources.
- Executing the program: The child process then uses the `exec` system call to replace its own memory space with that of the target program, specified by a pathname into the file system. The arguments to the program are also passed to the `exec` system call.
- Waiting: Meanwhile, the parent process waits for the child process to complete using the `wait` system call. This allows the parent to retrieve the exit status of the child process and any other information about its execution.
Once the child process has finished executing the program, it terminates and returns an exit status to the parent process, which can then continue executing its own code. This process is fundamental to how Unix and Linux systems execute programs and is used extensively in shell scripts and other automation tools.
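[Aside: for readers more at home in JavaScript than C, here is a rough Node.js analogue of the same protocol, using the `child_process` module. It is only an analogy: `spawn` bundles the fork and exec steps, and the `exit` event stands in for `wait`.]

    // Rough Node.js analogue of the fork/exec/wait protocol described above.
    const { spawn } = require('child_process');

    const child = spawn('/bin/ls', ['-l', '/tmp']);   // "fork + exec"

    child.stdout.on('data', chunk => process.stdout.write(chunk));

    child.on('exit', code => {                        // "wait": learn the exit status
      console.log(`child exited with status ${code}`);
    });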
Jack: Not only does the called code receive the explicitly-mentioned arguments, but also there is an "environment" available, which amounts to an additional argument or an additional set of arguments, which are handled implicitly.
ChatGPT: Yes, that is correct. In addition to the explicitly-mentioned arguments passed to an executable program on Unix or Linux, there is also an environment available to the program. This environment consists of a set of variables that are set by the shell or parent process and made available to the child process at the time of execution.
These variables can include information such as the user's home directory, the current working directory, the default editor, and other system settings. The environment variables are handled implicitly, meaning that they are not passed as explicit arguments to the program. Instead, they can be accessed by the program using system calls or library functions such as `getenv`.
The environment variables are typically used by programs to determine their behavior based on the current system configuration. For example, a program may use the `PATH` environment variable to determine where to find other executables or libraries needed for its execution. Other environment variables can be used to specify configuration options or control program behavior.
Overall, the combination of explicit arguments and the implicit environment provides a flexible and powerful mechanism for executing programs on Unix and Linux systems.
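[Aside: in Node.js the counterpart of `getenv` is the `process.env` object; a small sketch:]

    // Reading the implicit environment in Node.js (roughly what getenv does in C).
    const searchPath = process.env.PATH || '';
    const editor = process.env.EDITOR || 'vi';   // fall back if the variable is unset
    console.log(`PATH has ${searchPath.split(':').length} entries; editor is ${editor}`);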
Jack: In programming languages or models or systems other than Unix-like ones, the concept of an environment could be useful, quite as it was considered useful enough to be included in Unix.
GPT: Yes, the concept of an environment can be useful in programming languages, models, or systems other than Unix-like ones. In fact, many programming languages and systems have their own mechanisms for handling environment variables or settings.
For example, in the Java programming language, environment variables can be accessed using the `System.getenv()` method, which returns a map of environment variable names to their values. In the Windows operating system, environment variables can be set and accessed using the System Properties dialog or the `setx` command.
In general, the concept of an environment is useful for providing a flexible and configurable runtime environment for programs. By setting environment variables, users or administrators can control the behavior of programs without needing to modify their source code or configuration files. This can be especially useful in large-scale systems or distributed environments where there may be many different configurations or deployment scenarios to support.
Overall, the concept of an environment has proven to be a useful and adaptable tool for managing program behavior and configuration, and it is likely to continue to be used in a variety of programming languages, models, and systems in the future.
Jack: JavaScript has, natively, three kinds of procedure. There is the synchronous function, the `async` function, and the generator function. In using a synchronous function or an `async` function, I don't see any easy or convenient technique to provide an environment and make it available to the called function.
GPT: In JavaScript, when using a synchronous function or an `async` function, the environment variables are typically set outside of the function and then accessed within the function as needed. This is because JavaScript does not have a built-in mechanism for passing environment variables as arguments to a function.
One common approach is to use global variables or constants to store the environment variables, and then access them within the function using the global scope. For example, if you have a global variable `API_KEY` that stores an API key, you can access it within a function using `window.API_KEY` (in a web browser environment) or `global.API_KEY` (in a Node.js environment).
Another approach is to use a configuration object that is passed to the function as an argument. This object can contain any environment variables or settings needed by the function. For example:

    function myFunction(config) {
      // Access the API key from the configuration object
      const apiKey = config.apiKey;
      // Use the API key to make a request
      fetch(`https://api.example.com?apiKey=${apiKey}`)
        .then(response => {
          // Handle the response
        });
    }

    // Call the function with a configuration object
    myFunction({ apiKey: 'my-api-key' });

Finally, in some cases, it may be possible to use a third-party library or framework to manage environment variables more easily. For example, the `dotenv` library in Node.js allows you to define environment variables in a `.env` file, which can then be loaded into your application at runtime.
In general, while JavaScript does not have a native mechanism for passing environment variables to functions, there are several common approaches that can be used to make them available within a function.
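[Aside: a minimal sketch of the `dotenv` approach mentioned above; the `.env` contents and the variable name `MY_API_KEY` are hypothetical:]

    // Hypothetical .env file:
    //   MY_API_KEY=abc123

    // Load the file into process.env once at startup, then read it anywhere.
    require('dotenv').config();
    const apiKey = process.env.MY_API_KEY;
    console.log(apiKey ? 'API key loaded' : 'API key missing');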
Jack: Part of the concept of an "environment", if it is to work quite as it does on Unix, is that if the caller does not explicitly mention the environment when making a call, the called procedure receives the same environment as the caller had received from its caller.
GPT: Yes, that is correct. In the Unix environment model, if a caller does not explicitly pass environment variables to a called procedure, the called procedure will inherit the same environment as the caller.
This behavior is achieved through the use of the `fork` system call, which creates a new process that is a copy of the calling process. The new process, known as the child process, inherits a copy of the parent process's environment, including any environment variables that were set by the parent process or by any of its ancestors in the process tree.
When the child process then uses the `exec` system call to replace its image with that of the called procedure, it retains the inherited environment. This means that the called procedure has access to the same environment variables that were set by the caller and its ancestors.
This behavior is often desirable because it allows for a hierarchical organization of environment variables, where higher-level processes can set default values for variables that are inherited by lower-level processes. This can simplify the management of complex systems and ensure that all processes have access to the same configuration settings.
However, it's worth noting that not all programming languages or systems follow this model. In some cases, environment variables must be explicitly passed as arguments to called procedures, or they may not be used at all. Ultimately, the use of an environment depends on the specific requirements of the system or language being used.
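[Aside: a minimal JavaScript sketch of that inheritance rule, using a default parameter; the variable names are invented for illustration:]

    // Unix-style inheritance in miniature: omit `env` and the callee sees the
    // caller's environment; pass an object explicitly to override it.
    const defaultEnv = { LANG: 'en_US.UTF-8', TMPDIR: '/tmp' };

    const callee = (args, env = defaultEnv) =>
      `${args.join(' ')} (TMPDIR=${env.TMPDIR})`;

    // A caller that says nothing about the environment just threads its own along.
    const caller = (args, env = defaultEnv) => callee(args, env);

    console.log(caller(['do', 'work']));                          // inherits defaultEnv
    console.log(callee(['do', 'work'], { TMPDIR: '/scratch' }));  // explicit override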
Jack: The `exec` call has specific forms when called from C, mentioning a modified environment or not. When a form is used that does not accept an argument for the modified environment, the current environment is used for the actual system call.
GPT: Yes, that is correct. In Unix, the `exec` system call can be called with several different variants, some of which allow for the modification of the environment and others which use the current environment.
The basic form of the `exec` system call, `execl`, takes a pathname to an executable file and a variable number of arguments that represent the command-line arguments passed to the executable. In this form, the current environment is used.
The `execle` variant is similar to `execl`, but also takes an additional argument that represents the modified environment. This allows for the caller to explicitly specify a new environment to be used by the executable.
Similarly, the `execv` and `execve` variants are similar to `execl` and `execle`, respectively, but take an array of arguments and an optional environment array. These variants are often used in conjunction with the `fork` system call to create a new process and replace its image with that of the executable.
In general, the use of the `exec` system call variants that allow for the modification of the environment can be useful in situations where the caller needs to explicitly set environment variables or modify the environment for the called process. However, in many cases, the current environment is sufficient and there is no need to explicitly pass a modified environment.
Jack: It seems to me that if we want to preserve that feature, using a global for the environment in JavaScript can cause problems. For example, let's say function `a` calls function `b` and function `b` calls function `c` with a modified environment. Shall `b` save the current environment, modify the global, call `c`, then after `c` returns, restore the global environment? I am not sure that the necessary invariants could be maintained when concurrent or asynchronous behaviors are being programmed for.
GPT: You raise a good point. In a single-threaded, synchronous JavaScript environment, it might be possible to implement a global environment variable that is inherited by called functions. However, in a multi-threaded or asynchronous environment, the use of a global environment variable could introduce concurrency issues and make it difficult to ensure the correct behavior of the system.
One alternative approach that could be used in JavaScript would be to explicitly pass the environment as an argument to each function. This would allow for a more fine-grained control over the environment and would make it easier to reason about the behavior of the system in a concurrent or asynchronous context.
Another approach would be to use a closure to encapsulate the environment within a function. This would allow the environment to be shared between functions within the closure without exposing it to the global scope or other functions outside of the closure.
Ultimately, the choice of approach will depend on the specific requirements of the system and the programming model being used. However, it's worth noting that the use of a global environment variable could introduce issues in a concurrent or asynchronous environment and may not be the best choice for implementing an environment model in JavaScript.
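[Aside: a small sketch of the hazard being discussed, with hypothetical functions `b`, `c`, and `other`. While `b` is suspended awaiting `c`, another task can observe the temporarily modified global:]

    // Why save/modify/restore of a global environment is fragile under async code.
    let globalEnv = { mode: 'normal' };

    const c = async () => {
      await new Promise(r => setTimeout(r, 10));  // c suspends for a moment
      return globalEnv.mode;                      // reads whatever is global *now*
    };

    const b = async () => {
      const saved = globalEnv;
      globalEnv = { ...globalEnv, mode: 'special' };  // modify for c's benefit
      try {
        return await c();
      } finally {
        globalEnv = saved;                            // restore afterwards
      }
    };

    // Another task that happens to run while b is suspended sees the "wrong" env.
    const other = async () => globalEnv.mode;

    Promise.all([b(), other()]).then(modes => console.log(modes)); // other() reports 'special'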
Jack: Using the closure would be clean in the face of async behavior, but an inconvenient property of this approach is that subroutines written to run with an environment, even if they did not refer to any of the items in the environment, would still have to be coded in a different style from functions intended to be called without the environment. For example:
    m.linearTransform = env => (m, x, b) => m * x + b;
GPT: Yes, you raise a good point. Using closures to encapsulate the environment within functions can make the code more complex and require a different coding style for functions that need to access the environment.
In the example you provided, the `linearTransform` function takes an environment as an argument and returns a new function that performs a linear transformation using the environment. While this approach is clean and allows for fine-grained control over the environment, it does require a different coding style from functions that don't need to access the environment.
One possible alternative to this approach would be to use a module to encapsulate the environment and provide functions that operate on the environment. This would allow for a clean separation between functions that access the environment and those that don't, and would make it easier to reason about the behavior of the system.
For example, you could define a module that exports a function to create a new environment and functions to operate on the environment:
    const createEnvironment = () => {
      const environment = { /* ... */ };
      const linearTransform = (m, x, b) => m * x + b;
      return {
        getEnvironment: () => environment,
        linearTransform: linearTransform
      };
    };

    const env = createEnvironment();
    const m = 2, x = 3, b = 1;
    const result = env.linearTransform(m, x, b);
This approach allows for a clean separation between functions that need to access the environment and those that don't, and makes it easy to reason about the behavior of the system.
Jack: The technique you are suggesting would seem to require that the functions to be called all live in the environment, themselves.
GPT: That's correct. In the approach I suggested, functions that need to access the environment would need to be defined within the same module that creates the environment. This allows the functions to have access to the environment through closures, while also ensuring that they are encapsulated within the same module.
Here's an example that demonstrates this approach:
    const createEnvironment = () => {
      const environment = { /* ... */ };
      const linearTransform = (m, x, b) => m * x + b;
      const applyLinearTransform = (m, b) => {
        /* Functions defined within the same module can access the environment */
        const transform = x => linearTransform(m, x, b);
        return transform;
      };
      return {
        getEnvironment: () => environment,
        applyLinearTransform: applyLinearTransform
      };
    };

    const env = createEnvironment();
    const transform = env.applyLinearTransform(2, 1);
    const result = transform(3);
In this example, the `applyLinearTransform` function returns a new function that applies a linear transformation using the environment. The `linearTransform` function is defined within the same module as `applyLinearTransform`, so it has access to the environment through closures.
While this approach can make the code more complex, it also provides a clean separation between functions that need to access the environment and those that don't. This can make it easier to reason about the behavior of the system and avoid unintended side effects.
Jack: I think I can describe a technique having better software-engineering characteristics than any of those you have suggested. However, my technique requires all functions to be coded as generator functions.
GPT: I'm curious to hear about your technique. How do generator functions enable the use of environments in a more effective way than synchronous functions or async functions?
Jack: First, we have to build what I will call a context of execution, and this will include the initial environment.
    const env = {
      utilitiesPath: [
        "./framework/utilities",
        "./application/utilities"
      ],
    };
    const ctxt = {env};
GPT: I see, so you're proposing to build a context of execution that includes the environment. Can you elaborate on how generator functions can utilize this context to enable the use of environments in a more effective way?
Jack: Next, the outer code calls the first routine that is using the scheme.
    ctxt.iter = main();
    const resume = globalThis.application.framework.resume = ctxt => value => {
      const {value, done} = iter.next();
    };
GPT: I see. So you're using generator functions and a resume function to manage the execution of the functions in the context of the environment. Can you explain how this works in more detail?
Jack: Oops, I sent that before finishing it. Let me try again.
GPT: Sure, take your time.
Jack: The `resume` function is going to interpret every `yield` that occurs in the execution of one of the generator functions. It is able to do this because it is the function that is calling `.next` on the generator, and so it knows what the `.next` call is returning, and this includes whatever is yielded.

    const resume = ctxt => injectedValue => {
      const {value, done} = iter.next(injectedValue);
      if (done) { /* is return */
        ctxt.defer(ctxt.succeed, value);
        return;
      }
      /* else is yield */
      switch (typeof value) {
        case 'undefined': /* is naked `(yield)` */
          ctxt.defer(resume, ctxt);
          break;
        /* ... more cases */
      }
    };
The `resume` code above requires that the context also include a `defer` method and a `succeed` callback. Here is a simple implementation for each.

    ctxt.defer = (func, ...args) => setTimeout(func, 0, ...args);
    ctxt.succeed = () => {};
In called code, the environment, if needed, can be obtained with

    const {env} = yield;

Otherwise, it is not necessary to mention it.

    const linearTransform = function* (m, b, x) { return m * x + b };

Calls from one function to another within the scheme use `yield*`.

    const y = yield* linearTransform(2, 3, 4);
GPT: Thank you for sharing your technique for passing an environment in JavaScript using generator functions. It seems like an interesting approach, although it may not be suitable for all scenarios.
By using generator functions, the environment can be easily passed along with the generator function itself, which can be a clean and flexible solution. The `resume` function acts as a dispatcher, handling the `yield` values returned by the generator function and calling the appropriate functions based on the type of `yield`.
While this approach may require all functions to be written as generator functions, it does offer a clear and consistent way to handle the environment across different levels of function calls.
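[Aside: pulling the fragments above into one runnable sketch. The parts marked as assumptions are choices the dialog leaves open, not necessarily what Jack had in mind.]

    // One possible end-to-end consolidation of the scheme sketched above.
    // Assumptions of mine (the dialog leaves these as "... more cases"):
    //  - a naked `(yield)` is resumed with {env: ctxt.env}, so that
    //    `const {env} = yield;` actually receives the environment;
    //  - `resume(ctxt)` is applied before deferring, so the injected value arrives.

    const linearTransform = function* (m, b, x) { return m * x + b; };

    const main = function* () {
      const { env } = yield;                      // obtain the implicit environment
      const y = yield* linearTransform(2, 3, 4);  // in-scheme call via yield*
      return `utilities dirs: ${env.utilitiesPath.length}, y: ${y}`;
    };

    const ctxt = {
      env: { utilitiesPath: ['./framework/utilities', './application/utilities'] },
      defer: (func, ...args) => setTimeout(func, 0, ...args),
      succeed: value => console.log('done:', value),
    };

    const resume = ctxt => injectedValue => {
      const { value, done } = ctxt.iter.next(injectedValue);
      if (done) {                                 // the generator returned
        ctxt.defer(ctxt.succeed, value);
        return;
      }
      switch (typeof value) {
        case 'undefined':                         // naked `(yield)` asks for the env
          ctxt.defer(resume(ctxt), { env: ctxt.env });
          break;
        /* ... more cases (thenables, nested generator calls) ... */
      }
    };

    ctxt.iter = main();
    resume(ctxt)();                               // start the scheme
    // Eventually logs: done: utilities dirs: 2, y: 11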
Jack: If thenables are to be dereferenced, an equivalent to `await` can be included among the cases handled by `resume`.
GPT: Yes, that's correct! If thenables are to be dereferenced, an equivalent to `await` can be included among the cases handled by `resume`. Here's an example implementation:

    const resume = ctxt => injectedValue => {
      const {value, done} = iter.next(injectedValue);
      if (done) { /* is return */
        ctxt.defer(ctxt.succeed, value);
        return;
      }
      /* else is yield */
      switch (typeof value) {
        case 'undefined': /* is naked `(yield)` */
          ctxt.defer(resume, ctxt);
          break;
        case 'object': /* is a `thenable` object */
          value.then(resume(ctxt)).catch(ctxt.fail);
          break;
        case 'function': /* is a generator function */
          ctxt.defer(resume, { gen: value(), env: ctxt.env, resume });
          break;
        /* ... more cases */
      }
    };
In the case where `value` is an object, we check if it's a "thenable" (i.e., has a `then` method). If it is, we attach a `.then` callback to the object to continue execution of the generator when the promise resolves. Note that we're also calling `ctxt.fail` if the promise rejects, to propagate the error.
[Note from Jack: here, GPT has surmised that since I specified `.succeed` as one of the members of a context, the corresponding name, in the same English-language style, for failure or rejection, would be `fail`. This is a very human-like step of reasoning.]
GPT: With this change, we can now use thenables in the generator functions:
    function* getUser(userId) {
      const user = yield fetch(`/users/${userId}`).then(res => res.json());
      const friendIds = user.friends;
      const friends = yield Promise.all(
        friendIds.map(id => fetch(`/users/${id}`).then(res => res.json()))
      );
      return { ...user, friends };
    }
This code will fetch a user object from the server, get the list of friend IDs from the object, fetch each friend's object from the server, and return an object that combines the user and friends data. The `yield fetch(...)` and `yield Promise.all(...)` statements will be handled by `resume` and will pause execution until the promises resolve.
Jack: Oops, I wrote `iter.next` and I meant `ctxt.iter.next`. But you have renamed `iter` to `gen`, which is reasonable.
GPT: Thank you for pointing that out. Yes, I renamed `iter` to `gen` in my last response to avoid any confusion with the built-in Iterator interface in JavaScript.
Jack: I have been testing similar techniques and I have been using `ctxt.iter` because generators in fact conform to iterator protocol, and the technique uses that by calling `.next(injectedValue)`, and `next` is central to the requirements of conforming to iterator protocol. But maybe `gen` would make it clearer to other engineers if they try to read my code.
GPT: Yes, using `ctxt.gen` instead of `ctxt.iter` could make it clearer to other engineers that the generator function is being used as an iterator. However, it's ultimately up to your personal preference or the coding conventions of your team. As long as the code is clear and consistent, either choice should be fine.
Jack: Another minor software-engineering decision in connection with the scheme is whether to keep `resume` as an independent procedure as I showed it, or whether to make it a method inherited by the context object. This would not be a change to the fundamentals of the technique, but could make the code easier or harder to read and reverse engineer.
GPT: Whether to keep `resume` as an independent procedure or make it a method inherited by the context object is ultimately a matter of coding style and design preference. Both approaches have their advantages and disadvantages.
Keeping `resume` as an independent procedure makes it clear that it is a standalone utility function that can be used in other parts of the code. This can make the code more modular and reusable. It also makes it easier to test and debug `resume` separately from the rest of the code.
On the other hand, making `resume` a method of the context object can make the code more object-oriented and easier to follow, especially if there are multiple contexts involved. It can also make the code more readable, since the `resume` method would be a natural part of the context object and would not require an additional import or reference.
Jack: Most client code would not have to refer to `resume` by importing it from somewhere. The code is in fact called by `resume` and only needs to use `yield` or `return` (either of which does a synchronous return) in order to obtain the functionality of `resume`. Moreover, so long as a context object doesn't inherit, the code to create a new one is simpler.
GPT: That is correct. Since the `resume` function is called by the generator function, client code does not have to import it. Client code can simply use the `yield` statement to pause the execution of the generator function and pass control back to the caller (i.e., the `resume` function). The caller can then resume the execution of the generator function by calling its `next()` method.
Also, as you pointed out, if a context object does not inherit, creating a new one is simpler because client code only needs to create an object with the necessary properties (e.g., `env`, `iter`, `defer`, and `succeed`) and pass it to the generator function.
Jack: Your example included (and I am very impressed you came up with this):
    case 'object': /* is a `thenable` object */
      value.then(resume(ctxt)).catch(ctxt.fail);
      break;
Here, you take advantage of the fact that I coded `resume` as curried in two arguments. You apply it partially, only supplying the context argument, and leaving it to `.then` to supply the `injectedValue` argument. Your example is consistent with a deep understanding of the scheme as I described it. I think some of my human interlocutors might have been slow to realize the potential for using the partial application to make the code brief.
If I were to change `resume` from an independent procedure to a method, the call might look like:

    case 'object': /* is a `thenable` object */
      value.then(ctxt.resume).catch(ctxt.fail);
      break;
But this brief form would only work provided that every context has a `resume` bound to it. If `resume` is inherited and not bound, it would be necessary to write

    case 'object': /* is a `thenable` object */
      value.then(result => ctxt.resume(result)).catch(ctxt.fail);
      break;
I guess that thinking about this decision, of whether to use a method or an independent procedure, illustrates that often, when considering whether to use object-oriented programming or not, it is best to avoid OO when there is a straightforward solution that doesn't need it. I think it may be best to ask, when facing this sort of decision, whether there is a strong reason to use OO techniques, e.g. a need for polymorphism.
GPT: Yes, you make a good point. There is not always a need to use object-oriented programming when a simpler solution is available. Object-oriented programming can be powerful and useful, but it can also introduce unnecessary complexity to a codebase.
In general, it's a good idea to consider the trade-offs between using object-oriented programming versus functional programming techniques. Sometimes, a hybrid approach can work well, where some parts of the code use object-oriented techniques and other parts use functional techniques. Ultimately, the choice of programming paradigm will depend on the specific needs of the project and the preferences of the development team.
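[Aside: one way to get the brief `value.then(ctxt.resume)` form discussed above, without inheritance or classes, is to store a pre-bound `resume` on each context as it is created. A minimal sketch, assuming the curried `resume` from the dialog is in scope:]

    // Give each context its own pre-bound resume at creation time.
    const makeContext = (env, iter) => {
      const ctxt = {
        env,
        iter,
        defer: (func, ...args) => setTimeout(func, 0, ...args),
        succeed: value => console.log('done:', value),
        fail: err => console.error('failed:', err),
      };
      ctxt.resume = resume(ctxt);   // partial application, fixed to this context
      return ctxt;
    };

    // With that in place, the thenable case can use the brief form:
    //   value.then(ctxt.resume).catch(ctxt.fail);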
-
This piece is also relevant to a quest to understand the fundamentals of intelligence, even when it may be manifest outside humans. What Plants Are Saying About Us
-
@rob:
By the way I suggest shareGPT chrome extension for sharing conversations (although it sucks that it doesn't give them a title).
What sucks much more, assuming I can believe the description, is that it only saves an image, not text out of which one could copy and paste snippets.
On the other hand, copying and pasting with the browser's normal copy-and-paste operations does not preserve the formatting of GPT's part of the interaction. The backquotes around programming keywords are lost, for example.
What a pain.
-
@jack-waugh said in "The False Promise of ChatGPT" Chomsky, Roberts, Watumull:
What sucks much more, assuming I can believe the description, is that it only saves an image, not text out of which one could copy and paste snippets.
I posted a ton of them in another thread, and you can see it doesn't save an image, it saves the text: (maybe there are multiple extensions with the same name, mine is https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb ) Also, there's this: https://addons.mozilla.org/en-US/firefox/addon/mygpt-share/
https://sharegpt.com/c/sKWlL5C
https://sharegpt.com/c/2y32xym
https://sharegpt.com/c/qj6NXL3
https://sharegpt.com/c/O39EofJ
https://sharegpt.com/c/BP1V1Cf
https://sharegpt.com/c/p48TnHJ
https://sharegpt.com/c/kchAFh9
https://sharegpt.com/c/K4T9MJT
https://sharegpt.com/c/65nh64H
https://sharegpt.com/c/1fNcMpu
https://sharegpt.com/c/JXRewpr
https://sharegpt.com/c/ALyB7C6
https://sharegpt.com/c/UGGOYEN
@jack-waugh said in "The False Promise of ChatGPT" Chomsky, Roberts, Watumull:
The backquotes around programming keywords are lost, for example.
I'm not sure what you mean. I have now done a huge amount of coding with ChatGPT (well, given that I've only been using it for a few weeks), and I've never run into that sort of issue.
That said, there are a lot of things about ChatGPT's interface (as well as ShareGPT's) that are frustrating (the worst of which is if it stops halfway through, and when you ask it to continue it messes up the formatting). But those are all easily fixable things, and I don't expect them to be long-term issues. What ChatGPT does well, though, is an immensely difficult problem that they've done a staggeringly good job at, especially for a first release.
I skimmed through your chat with ChatGPT, and it does appear you were more interested in getting it to agree with you than in getting it to provide you with something useful. Was anything you said intentionally incorrect? It seemed to be trying to be diplomatic, which it certainly does seem to see as a priority. There's nothing wrong with the sort of conversation you had, but I'd still suggest trying to get it to do something useful.
From my observation, there is one thing you need in order to get ChatGPT to provide something useful: you have to want it to provide something useful. If you go in wishing for it to fail, it will fail. If you are excited about its potential, you'll find it. There are a ton of naysayers out there, all of whom seem to be very obviously threatened by it.
-
@rob said in Way too many categories:
And while others here don't seem to have made this connection (yet?), to me the problem has gotten 100 times more urgent very recently, given that we're suddenly in an AI arms race, which a divided society is especially not ready for. It's sort of like nuclear weapons, except that generative AI spins gold right up until it destroys us all. And one nice thing about nuclear weapons is we can be pretty sure that the weapons themselves aren't going to decide on their own to wipe us out. Another nice thing about nuclear weapons is that the people who build them actually know how they work. (generative AI such as GPT-4 is essentially an enormous matrix of floating point numbers that no one on the planet truly understands why it works the way it does)
(hey I've been accused of being alarmist before. Usually I don't think I am. Here, yeah, I'm pretty freaking alarmist.)
I've quoted from a different thread because I think this discussion is possibly more relevant here.
I wrote a paper a few years ago making an argument against the future existence of superintelligent AI, based on anthropic reasoning. You might not agree with the argument or think it's a very strong argument, but I thought I'd leave it with you anyway!