Prompt injection proves AI models are gullible like humans
Summary
kettle: Aren't we all just prompting tokens of linguistic meaning and hoping the other person isn't bullshitting us?