This implements a conversation with GPT-3 or Jurassic 1. It adds a prompt to keep the conversation friendly and adds the recent conversation (or a designed one if the conversation is new). You can use any of the 4 GPT-3 models or either of the 2 Jurassic 1 models and adjust the options.
An API key from openai.com is necessary for GPT-3 and a key from ai21.com for the Jurassic 1 models.
Learn more about this at
https://ecraft2learn.github.io/ai/AI-Teacher-Guide/chapter-5.html#GPT-3%20Gopher
ada
ada
babbage
curie
davinci
j1-large
j1-jumbo
sanitized last response
Deal with special characters in the response.
And now let's continue talking.
Listens to the microphone for speech. The first script is run with the recognized text; if an error is encountered, the second script is called with text describing the error.
'speech input' reports the last thing spoken or, if there was an error, the error message. Note that "no-speech" is an error message, while if spoken it would be "no speech". NOTE that this waits until something is spoken or enough time has passed.
Otherwise asks for typed input.
last thing you said
last letter was backslash\
Many responses end with comma then space, so remove them.
clean response
Gopher
You can choose the GPT-3 engines, which in order of quality (and cost!) are davinci, curie, babbage, and ada.
Or the Jurassic 1 engines j1-large and j1-jumbo.
Or Cohere's large and xlarge models.
OK
ada
Ahduh
Use Ahduh since Ada is sometimes pronounced A D A.
Reports the portion of the first input string starting after the first occurrence of the second string. If the second string isn't found in the first string, reports an empty string.
pt:o texto de _ depois de _
ca:part del text _ posterior a _
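The 'text after' reporter just described, and the 'text before' counterpart defined later in the library, can be sketched in Python; the function names here are illustrative, not the library's:

```python
def text_after(text, target):
    """Portion of `text` after the first occurrence of `target`.
    Reports an empty string when `target` is not found."""
    index = text.find(target)
    return "" if index == -1 else text[index + len(target):]

def text_before(text, target):
    """Portion of `text` up to the first occurrence of `target`.
    Reports the entire string when `target` is not found."""
    index = text.find(target)
    return text if index == -1 else text[:index]
```

For example, `text_after("User: hello", "User: ")` reports "hello", matching the block's use for stripping speaker labels from a conversation.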
0.7introduction.
A summary of the conversation so far is:
A fragment of the most recent portion of the conversation follows:
The conversation follows.
or 10ada
babbage
curie
davinci
j1-jumbo
j1-grande
j1-large
small
medium
large
xlarge0.711skipped speakerprompt
Respond only with
Answer:last promptbest speaker numbertruetrue011110accepted speakerspeaker numberlast responseUserUser?add comment here...de:Kostüm aus Text _ Größe _
pt:um traje com o texto _ de tamanho _
ca:vestit amb el text _ i mida _
Sends messages to the chat version of GPT-3.5. Documentation can be found at
https://platform.openai.com/docs/api-reference/chat/createOpenAIOpenAI
AI21
Cohere
HuggingFacedb_remove(key)When comparing many distances one can save some time by avoiding many calls to square root. de:kombiniere mit _ die Elemente von _
1See https://en.wikipedia.org/wiki/Cosine_similarity
0 magnitude forces its computation.
Note that the 'features closest to list of features' block should be used to compare one list with a list of lists because it is much faster.Language to use for speech input and output only if no language or voice is specified. Can be the language name in English or in the language itself. Or the official code for the language, such as fr-FR for French in France. Also used as the default language for the word embedding blocks.Reports all the words that have a list of features accessible via the "features of" reporter block. Data obtained from https://github.com/facebookresearch/fastTextChinese
English
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Lithuanian
Portuguese
Sinhalese
Spanish
SwedishReports the word that is closest to the features. Uses a distance measure which can be either Euclidean distance or cosine similarity. You can choose from among 15 languages or leave it blank for the default language.Chinese
English
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Lithuanian
Portuguese
Sinhalese
Spanish
SwedishCosine similarityEuclidean distance
Cosine similarityReports a location of a word embedding using t-SNE. See https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embeddingChinese
English
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Lithuanian
Portuguese
Sinhalese
Spanish
Swedish1StageStage22StageStage2Convert from 0 to 1 coordinates to Snap! coordinates.LABEL will stamp text on the stage at the given font size. The direction of the text is the direction the sprite is facing, and color will match the pen color.de:drucke _ in Größe _
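The 0-to-1 coordinate conversion mentioned above can be sketched as follows, assuming Snap!'s default 480x360 stage with the origin at its center:

```python
STAGE_WIDTH, STAGE_HEIGHT = 480, 360  # Snap!'s default stage size

def to_stage_coordinates(x01, y01):
    """Map (x, y) in [0, 1] x [0, 1] to Snap! stage coordinates,
    where (0, 0) is the center of the stage."""
    x = (x01 - 0.5) * STAGE_WIDTH
    y = (y01 - 0.5) * STAGE_HEIGHT
    return x, y
```

So (0.5, 0.5) lands at the stage center and (0, 0) at the bottom-left corner.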
Hello!12Displays the word near similar words.Chinese
English
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Lithuanian
Portuguese
Sinhalese
Spanish
Swedish121Reports the word that is closest to the features. You can choose from among 15 languages or leave it blank for the default language. There is a full-featured version of this block as well.Chinese
English
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Lithuanian
Portuguese
Sinhalese
Spanish
SwedishReports a list of 300 numbers that "describes" the word. You can choose among 15 languages or leave it blank and use the default language.
To learn about these numbers see https://en.wikipedia.org/wiki/Word_embedding
The new name for this block is "features of word ..."Chinese
English
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Lithuanian
Portuguese
Sinhalese
Spanish
SwedishFinds the average of the features of each word in 'words'.Chinese
English
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Lithuanian
Portuguese
Sinhalese
Spanish
SwedishPasses a list of lists of 512 features of input 'sentences' to 'process result'. Reports 512 features for the sentence.list of features listPasses to 'process result' a tokenization of the sentence.Reports tokens for the sentence.tokensReports a list of 512 features for each sentence.list of features listReports a list of 300 numbers that "describes" the word. You can choose among 15 languages or leave it blank and use the default language.
To learn about these numbers see https://en.wikipedia.org/wiki/Word_embeddingChinese
English
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Lithuanian
Portuguese
Sinhalese
Spanish
SwedishUses 'passage' to answer 'question'.
The 'answer question' reporter is easier to use and relies upon this block.This uses BERT to answer questions.
See https://blog.tensorflow.org/2020/03/exploring-helpful-uses-for-bert-in-your-browser-tensorflow-js.htmlReports the top answer to the 'question' using the 'passage'.Reports a list of 5 elements where each element is a list of an answer to the 'question' and its score of how good an answer it is. The highest scoring answer is first.answersWill display 'message' in a dialog box with 'title'. User needs to click 'OK' to remove it.A concise shortcut to the 'features of ...' block. It uses English.A concise shortcut to the 'closest word to ...' block. Uses English and excludes no words.Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
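The span-ranking step behind these question-answering reporters can be sketched with toy scores; a real BERT model produces start and end scores per subword token, but the selection logic is the same idea:

```python
def best_answer_spans(tokens, start_scores, end_scores, top_n=5):
    """Rank candidate answer spans the way BERT-style QA does:
    score(span i..j) = start_scores[i] + end_scores[j] with j >= i.
    Reports up to top_n (answer text, score) pairs, best first.
    Toy illustration only -- not the actual model code."""
    candidates = []
    for i, start in enumerate(start_scores):
        for j in range(i, len(end_scores)):
            candidates.append((" ".join(tokens[i:j + 1]),
                               start + end_scores[j]))
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return candidates[:top_n]
```

The 'answers' reporter's list of 5 answers with scores corresponds to the top_n ranking here.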
More info at
https://beta.openai.com/docs/api-reference/parameter-details0Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
More info at
https://beta.openai.com/docs/api-reference/parameter-details0The maximum number of tokens to generate per result.
Optional, default = 16.
If no stopSequences are given, generation is stopped
after producing maxTokens.16Modifies the distribution from which tokens are sampled. Optional, default = 1.0. Maximum value 5.0
Setting temperature to 1.0 samples directly from the model distribution. Lower (higher) values increase the chance of sampling higher (lower) probability tokens. A value of 0 essentially disables sampling and results in greedy decoding, where the most likely token is chosen at every step.
More info at
https://studio.ai21.com/docs/api/#requestSample tokens from the corresponding top percentile of probability mass. Optional, default = 1.0.
For example, a value of 0.9 will only consider tokens comprising the top 90% probability mass.1Applies a fixed bias against generating tokens that appeared at least once in the prompt or in the completion.
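The temperature and top-p controls described above can be illustrated as operations on a token probability distribution; this is a generic sketch, not any provider's exact implementation:

```python
import math

def apply_temperature(logits, temperature):
    """Rescale logits by temperature and renormalize with softmax.
    temperature = 0 is treated as greedy (argmax) decoding."""
    if temperature == 0:
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [value / temperature for value in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(value - peak) for value in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p):
    """Keep only the most likely tokens whose cumulative probability
    reaches p, zero out the rest, and renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0
            for i in range(len(probs))]
```

Lower temperatures sharpen the distribution toward the most likely token; a top-p of 0.9 discards the low-probability tail before sampling.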
A positive penalty value implies reducing the probability of repetition.
Read more at
https://studio.ai21.com/docs/api/#repetition-penalties0Number between 0 and 500. High values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
More info at
https://studio.ai21.com/docs/api/#repetition-penalties0Stops decoding if any of the stop texts is generated.
The decoded result text will not include the stop sequence text, but it will be included in the raw token data, which can also continue beyond the stop sequence if the sequence ended in the middle of a token. Applies a bias against generating tokens that appeared in the prompt or in the completion, proportional to the number of respective appearances.
More info at
https://studio.ai21.com/docs/api/#repetition-penalties0Can communicate with over 15,000 models.
The API key can be obtained from huggingface.co
More information at
https://ecraft2learn.github.io/ai/AI-Teacher-Guide/chapter-5.html#hugging-faceInput can either be text or a list of texts.Inputs are texts that are potential classes for inputs. (Max 10 candidate_labels; for more, simply run multiple requests. Results are going to be misleading if using too many candidate labels anyway. If you want to keep the exact same behavior, you can simply run with multi_label=True and do the scaling on your end.)"candidate_labels": []If true then classes can overlap.trueThe minimum length in tokens of the output summary.The maximum length in tokens of the output summary.The top tokens considered within the sample operation to create new text.Defines the tokens that are within the 'sample' operation of text generation. Adds tokens to the sample from most probable to least probable until the sum of the probabilities is greater than top p.What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
We generally recommend altering this or top_p but not both.The more a token is used within generation, the more it is penalized so as not to be picked in successive generation passes. Value is between 0 and 100.The maximum amount of time in seconds that the query should take. The network can cause some overhead, so it will be a soft limit. Value is between 0 and 120.Input can either be text or a list of texts.}This can be used to answer several questions at once. If the first input is a list then the second input should have the same number of text elements.""Ranks the 'features' in their closeness to each of the features in the second input. Reports a list of indices into the list of features and the negative cosine similarity between each of the features in the second input and the first input. See https://en.wikipedia.org/wiki/Cosine_similarityReports all the words sorted by closeness to 'features'. Optionally can report pairs of words and distances. Distances are measured using cosine similarity.Chinese
English
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Lithuanian
Portuguese
Sinhalese
Spanish
SwedishfalseCosine similarityEuclidean distance
Cosine similaritywordsPasses a list of all 20,000 words in 'language', sorted by how close they are to the first input, to the 'process result' script. Optionally can pass each word together with its distance to the first input (using cosine similarity or Euclidean distance).Chinese
English
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Lithuanian
Portuguese
Sinhalese
Spanish
SwedishfalseCosine similarityEuclidean distance
Cosine similarityChinese
English
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Portuguese
Sinhalese
SwedishChinese
English
Finnish
French
German
Greek
Hindi
Indonesian
Italian
Japanese
Portuguese
Sinhalese
SwedishReports the first voice number that matches all the arguments. If nothing matches tries to find a voice that matches the first argument.Speaks 'message'. Optionally you can provide the pitch (a positive number), rate (a number between .1 and 2), voice name or number (depends upon what voices are accessible from the browser - see 'get voice names'), volume (a positive number), and language code or name. This command will wait until it is finished if wait is true.Ask the specified GPT-3 engine to edit the input.
Input 3 must be your OpenAI API key which you can obtain from https://openai.com
For documentation on the options see
https://beta.openai.com/docs/api-reference/edits/
What dey of the wek is it?Fix the spelling mistakesHow many completions to generate for each prompt.
Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.
Details at
https://beta.openai.com/docs/api-reference/parameter-details2Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed.
When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n.
Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.
Details at
https://beta.openai.com/docs/api-reference/parameter-details2Modifies the distribution from which tokens are sampled.
Min value of 0.0, max value of 5.0. A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations.
if not specified default is 0.75
For more details see
https://docs.cohere.ai/temperature-wiki0.75Denotes the number of tokens to predict per generation.
Defaults to 20 if not specified.
Details at
https://docs.cohere.ai/bpe-tokens-wiki16Min value of 1, max value of 5. Denotes the maximum number of generations that will be returned. Defaults to 1 if not specified.2A stop sequence will cut off your generation at the end of the sequence. Providing multiple stop sequences in the array will cut the generation at the first stop sequence in the generation, if applicable.Defaults to 0.0, min value of 0.0, max value of 1.0. Can be used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.0Defaults to 0.0, min value of 0.0, max value of 1.0. Can be used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.
More info at
https://docs.cohere.ai/generate-reference#request0Defaults to 0 (disabled), which is the minimum. Maximum value is 500. Ensures only the top k most likely tokens are considered for generation at each step.Defaults to 0.75. Set to 1.0 or 0 to disable. If set to a probability 0.0 < p < 1.0, it ensures that only the most likely tokens, with total probability mass of p, are considered for generation at each step. If both k and p are enabled, p acts after k.An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
Details at
https://beta.openai.com/docs/api-reference/parameter-details0The maximum number of tokens to generate in the completion.
The token count of your prompt plus max tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens.
More tokens takes more time and costs more.
More info at
https://beta.openai.com/docs/api-reference/parameter-details
https://studio.ai21.com/docs/api/#request
https://docs.cohere.ai/generate-reference#request16GPT-3ada
babbage
curie
davinci
j1-jumbo
j1-large
small
medium
large
xlargeHigh values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
'penalty' should range from 0 to 1.
More info at
https://beta.openai.com/docs/api-reference/parameter-details
https://studio.ai21.com/docs/api/#repetition-penalties
https://docs.cohere.ai/generate-reference#request0ada
babbage
curie
davinci
j1-jumbo
j1-large
small
medium
large
xlargeHigh values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
'penalty' should range from 0 to 1.
More info at
https://beta.openai.com/docs/api-reference/parameter-details
https://studio.ai21.com/docs/api/#repetition-penalties
https://docs.cohere.ai/generate-reference#request0ada
babbage
curie
davinci
j1-jumbo
j1-large
small
medium
large
xlargeada
babbage
curie
davinci
j1-jumbo
j1-large
small
medium
large
xlargeThe maximum number of completions of the prompt to report. Because this parameter generates many completions, it can quickly consume your token quota.2How many completions to generate for each prompt.
Cohere sets the maximum value at 5.
Jurassic 1 sets the maximum to 16. (Probably GPT-3 does too.)
Note: Because this parameter generates many completions, it can quickly consume your token quota.
More info at
https://beta.openai.com/docs/api-reference/parameter-details
https://studio.ai21.com/docs/api/#request
https://docs.cohere.ai/generate-reference#request2adaada
babbage
curie
davinci
j1-jumbo
j1-large
small
medium
large
xlargeAn alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
'p' must be between 0 and 1.
We generally recommend altering this or temperature but not both.
More info at
https://beta.openai.com/docs/api-reference/parameter-details
https://studio.ai21.com/docs/api/#request
https://docs.cohere.ai/generate-reference#requestGPT-3
Jurassic 1
CohereAsk the specified GPT-3 engine to complete the prompt
Input 3 must be your OpenAI API key which you can obtain from https://openai.com
For documentation on the options see
https://beta.openai.com/docs/api-reference/completions/
adaada
babbage
curie
davinciReports each completion in a list if more than one completion requested or prompts is a listAsk the specified Cohere engine to complete the prompt
Input 3 must be your Cohere API key which you can obtain from
https://os.cohere.ai/
For documentation on the options see
https://docs.cohere.ai/generate-referencexlargesmall
medium
large
xlarge
command-xlargeReports each completion in a list if more than one completion requested or prompts is a listAsk the specified Jurassic 1 engine to complete the prompt with the options provided.
Input 3 must be your AI21 API key which you can obtain from ai21.com
For documentation on the options see https://studio.ai21.com/docs/api/j1-jumboj1-jumbo
j1-grande
j1-largeWhat sampling temperature to use. Higher values mean the model will take more risks. A temperature of zero will always take the most likely next token (word), which is good when expecting a well-defined answer.
'temperature' should normally be between 0 and 1.
But OpenAI accepts values up to 2 and others up to 5.
We generally recommend altering this or top_p but not both.
More info at
https://beta.openai.com/docs/api-reference/parameter-details
https://studio.ai21.com/docs/api/#request
https://docs.cohere.ai/generate-reference#requestada
babbage
curie
davinci
j1-jumbo
j1-large
small
medium
large
xlargeadaada
babbage
curie
davinci
gpt-3.5-turbo
gpt-4
j1-jumbo
j1-grande
j1-large
j2-jumbo
j2-grande
j2-large
small
medium
large
xlarge
command-xlarge000See https://en.wikipedia.org/wiki/Cosine_similarity
There is a pure Snap! version of this.
Note that the 'features closest to list of features' block should be used to compare one list with a list of lists because it is much faster.Reports the cosine similarity of features assumed to have a magnitude of 1."stop":["",""]This block allows you to set the pen's color, transparency (on a scale from 0=opaque to 100=invisible), or size (in pixels). The color can be set in many ways, including RGB values, HSV values, HSL values, fair HSV or HSL (based on a modified spectrum that gives each spectral color equal space and promotes brown to a color), color number, crayon, or X11/W3C color name.
See Appendix A in the Snap! Reference Manual for details.color
color number
crayon
fair hue
~1
size
transparency
X11/W3C name
~2
(3D color spaces)={
fair HSL={
fair hue
fair saturation (HSL)
fair lightness
fair HSL vector
}
fair HSV={
fair hue
fair saturation (HSV)
fair value (brightness)
fair HSV vector
}
HSL={
hue
saturation (HSL)
lightness
HSL vector
}
HSV={
hue
saturation (HSV)
value (brightness)
HSV vector
}
RGB (0-255)={
red
green
blue
RGB(A) vector
RGB(A) hex
}
}§_ext_clr_numbersNote to myself: When the user sets a fair dimension, say fair HSL, variable HSL contains the corresponding "unfair" values, and "raw HSL" contains the numbers the user actually gave. It has to be this way because in the case of HSV, there is no "HSV" variable; rather, we use the pen's actual remembered-by-Javascript color's HSV coordinates. So our HSL variable has to be analogous to that. In both cases "raw" means the numbers the user gave us.HSV✐ raw HSL100HSV100pt:lança o erro _
de:fange _ _
ca:agafa _ _
es:atrapar _ _
fr:attrape _ _
pt:captura _ _
cont3de:wirf _
ca:llança _
es:lanzar _
fr:lance _
pt:lança _
catchtagde:ignoriere _
ca:ignora _
es:ignorar _
fr:ignore _
pt:ignora _
Sets the pen color
to one of 100 preselected colors, like a box of 100 crayons.
The colors have names that are meant to be evocative.
They are organized in families, more or less corresponding
to spectral (rainbow) colors; the input slot has a two-level menu
in which you can hover over a family name to see the colors
of that family, each including a crayon number and an RGB value.
See Appendix A of the Snap! Reference Manual for more details.grays={
0 black #000000=0
1 gray7 #121212=1
2 gray14 #242424=2
3 gray21 #363636=3
4 gray28 #484848=4
5 gray36 #5c5c5c=5
6 gray43 #6d6d6d=6
7 gray50 #7f7f7f=7
8 gray57 #919191=8
9 gray64 #a3a3a3=9
10 gray71 #b5b5b5=10
11 gray78 #c8c8c8=11
12 gray85 #dadada=12
13 gray92 #ececec=13
14 white #ffffff=14
}
pinks={
15 deep pink #ff1493=15
16 hot pink #ff69b4=16
17 bright pink #ff007f=17
18 raspberry #e30b5d=18
19 amaranth #e52b50=19
}
reds={
20 red #ff0000=20
21 burgundy #900020=21
22 cherry #990000=22
23 dark candy apple red #a40000=23
24 sanguine #c00000=24
25 maroon #800000=25
26 crimson #c90016=26
27 Lists #d94d11=27
28 candy apple red #ff0800=28
29 coquelicot #ff3800=29
}
browns={
30 saddle brown #8b4513=30
31 chocolate #7b3f00=31
32 kobicha #6b4423=32
33 sepia #704214=33
34 chestnut #954535=34
35 dark brown #654321=35
36 brown #964b00=36
37 golden brown #996515=37
38 cinnamon #b87333=38
39 copper #d2691e=39
}
oranges={
40 orange #ff7f00=40
41 Pantone orange #ff5800=41
42 pumpkin #ff7518=42
43 Variables #f3761d=43
44 Spanish orange #e86100=44
45 burnt orange #cc5500=45
46 sinopia #cb410b=46
47 ochre #cc7722=47
48 carrot #ed9121=48
49 tangerine #f28500=49
}
yellows={
50 yellow #ffff00=50
51 Control #e6a822=51
52 dark goldenrod #b8860b=52
53 goldenrod #daa520=53
54 saffron #f4c430=54
55 sandstorm #ecd540=55
56 mustard #ffdb58=56
57 gold #ffd700=57
58 egg yolk #fee33e=58
59 rubber duck #fbe108=59
}
greens={
60 lime #00ff00=60
61 apple green #8db600=61
62 Operators #62c213=62
63 forest green #228b22=63
64 green #008000=64
65 dark green #006400=65
66 dark pastel green #03c03c=66
67 emerald #50c878=67
68 mint #3eb489=68
69 Pen #00a178=69
}
cyans={
70 aqua (cyan) #00ffff=70
71 dark cyan #008b8b=71
72 cerulean #007ba7=72
73 iceberg #71a6d2=73
74 Sensing #0494dc=74
75 teal #008080=75
76 light sky blue #87cefa=76
77 deep sky blue #00bfff=77
78 dodger blue #1e90ff=78
79 azure #007fff=79
}
blues={
80 blue #0000ff=80
81 midnight blue #191970=81
82 dark powder blue #003399=82
83 cobalt #0047ab=83
84 denim #1560bd=84
85 navy blue #000080=85
86 steel blue #4682b4=86
87 Motion #4a6cd4=87
88 cornflower #6495ed=88
89 slate blue #6a5acd=89
}
purples={
90 violet #8000ff=90
91 Looks #8f56e3=91
92 grape #6f2da8=92
93 indigo #4b0082=93
94 x11 purple #a020f0=94
95 magenta (fuchia) #ff00ff=95
96 dark orchid #9932cc=96
97 Sound #cf4ad9=97
98 purple #7f007f=98
99 dark magenta #8b008b=99
}✐ last set ascrayon✐ fair?false1231Takes three inputs for red, green, and blue
values, each between 0 and 255. 0,0,0 is black; 255,255,255 is white. 255,255,0 is yellow, and so on.
The SET PEN block in this library lets you set individual red, green, or blue without changing the others, lets you provide a list of three RGB color components, and lets you provide a six-digit hexadecimal number, the form in which RGB color values are usually found online.
See Appendix A of the Snap! Reference Manual for details.pt:altera a cor da tua caneta para vermelho _ , verde _ e azul _ (0 a 255)
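The six-digit hexadecimal form mentioned above relates to 0-255 RGB components as sketched here (illustrative helper names):

```python
def rgb_to_hex(red, green, blue):
    """Pack 0-255 RGB components into the six-digit hexadecimal
    form in which RGB color values are usually found online."""
    return "{:02x}{:02x}{:02x}".format(red, green, blue)

def hex_to_rgb(hex_string):
    """Unpack a six-digit hex color into [red, green, blue]."""
    return [int(hex_string[i:i + 2], 16) for i in (0, 2, 4)]
```

For example, the crayon "deep pink" listed above as #ff1493 unpacks to red 255, green 20, blue 147.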
25500Takes three inputs for hue, saturation, and value ("brightness")
values, each between 0 and 1. 0,0,0 is black; 0,0,1 is white. 0.15,1,1 is yellow, and so on.
The SET PEN block in this library lets you set individual hue, saturation, or value without changing the others, lets you provide a list of three HSV color components, and lets you
use the very much superior HSL (hue, saturation, lightness)
color space.
See Appendix A of the Snap! Reference Manual for details.pt:altera a cor da caneta para matiz _ , saturação _ e brilho _ (0 a 1)
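The 0-to-1 HSV inputs described above map to 0-255 RGB roughly as sketched below, using Python's standard colorsys module (the 0-255 scaling is for comparison with the RGB pen block):

```python
import colorsys

def hsv_pen_to_rgb255(hue, saturation, value):
    """Convert 0-1 HSV pen inputs to 0-255 RGB components."""
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
    return [round(component * 255) for component in (r, g, b)]
```

So 0,0,0 gives black, 0,0,1 gives white, and a hue of 0 with full saturation and value gives pure red.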
0.30.70.6Reports the current pen color as a list of three RGB values in the range 0-255. The PEN block in this library provides a large set of ways to examine the color.
See Appendix A in the Snap! Reference Manual for details.pt:a cor da caneta em RGB (vermelho, verde e azul; 0 a 255)
Reports the current pen color as a list of three HSV values
in the range 0-1.
See Appendix A in the Snap! Reference Manual for details.pt:a cor da caneta em HSV (matiz, saturação e brilho, 0 a 1)
Don't ask.
Only for internal use by the SET PEN block.truetrueCreates a bunch of global variables.
For internal use by Color library.global✐ last set as✐ fair?✐ color scale✐ HSL✐ last crayon✐ last X11✐ raw HSL✐ raw HSV✐ X11✐ crayon colors✐ fair hue table✐ colors✐ brights✐ pivots✐ HSV brights✐ HSL brights✐ HSV colors✐ HSL colors✐ X11 catch✐ dimension names✐ dimension dispatch✐ last set asRGB✐ fair?false✐ raw HSV0100100✐ HSL010050✐ raw HSL010050value100color3.6index1index-1scale1newHSVHSL✐ fair?truecolor300index15scale15grayscale1✐ last set ascolor number✐ fair?true✐ raw HSL10050✐ raw HSV100100015866161216341656✐ last set asRGB✐ fair?false144partial matchestrue4matchhead matchestrueSUBSET throws here if it handles a unique match1✐ last set asHSL33✐ fair?false1✐ last set asRGB✐ fair?false✐ last set asRGB✐ fair?false✐ last set asRGB✐ fair?false✐ last set asRGB✐ fair?false04value0255131✐ last set asRGB✐ fair?false✐ last set asHSL22✐ fair?false1✐ last set asHSL✐ HSL11000100✐ raw HSL✐ fair?false1✐ last set asHSV✐ fair?false✐ raw HSV1310031100if USED TO BE fairHSL✐ fair?false1✐ last set asHSL2222✐ fair?true1✐ last set asHSL3333✐ fair?true1231322✐ last set asHSV✐ fair?true133✐ last set asHSV✐ fair?true123131Reports the part of the first string up to the first instance of the second string inside it. If the second string isn't found, reports the entire first string.pt:o texto de _ antes de _
If input is TRUE, comparisons made by functions in the string library will be case-independent (so "FOO" = "foo"). This is the default.
If input is FALSE, comparisons will be exact.pt:altera comparações ignorando distinção minúsculas/maiúsculas para _
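The case-independence setting can be sketched as follows (illustrative names; `casefold` stands in for whatever normalization the string library actually applies):

```python
CASE_INDEPENDENT = True  # the library's default

def strings_equal(a, b, case_independent=CASE_INDEPENDENT):
    """Compare strings the way the setting describes:
    case-independent by default, exact when turned off."""
    if case_independent:
        return a.casefold() == b.casefold()
    return a == b
```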
This block creates new variables on the selected scope: global (for all sprites), sprite (for this sprite only) or script (only for that blocks stack) with the names given (in 'names' list).
If there is already a variable with that name in that scope, it does nothing: no errors and no overwrites.pt:cria as variáveis _ _
ca:crea les _ variables _
es:crear las _ variables _
de:erstellen _ var _
globalglobal
sprite
scriptThis block reports "true" if there is a variable with this given name (input slot) in that context.
It can be a global, sprite or script variable.
Otherwise it reports "false".pt:a variável _ existe
ca:existeix la variable _ ?
es:existe la variable _ ?
de:existiert var _ ?
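The create-variables and variable-exists behavior described above can be sketched with a dictionary standing in for a scope; the initial value of 0 is an assumption, and the no-overwrite rule is the point:

```python
def create_variables(scope, names):
    """Create each named variable in `scope` (a dict standing in for a
    global, sprite, or script scope). Existing names are silently left
    alone: no errors and no overwrites. Initial value 0 is assumed."""
    for name in names:
        scope.setdefault(name, 0)

def variable_exists(scope, name):
    """Report whether a variable with this name exists in the scope."""
    return name in scope
```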
err_resetInclude the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5.
If included then the 'complete' block will report a list with the completion and the probabilities for each token.
Details at
https://beta.openai.com/docs/api-reference/parameter-details5Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5.
If included then the 'complete' block will report a list with the completion and the probabilities for each token.
Details at
https://beta.openai.com/docs/api-reference/parameter-details5For Cohere there is the similar feature return_likelihoods
For AI21 there is always topTokens in the responseReports a list of costumes that match 'description'1256256
512
1024Ignore the encoding of the costumeReports a list of costumes that are variations of 'costume'1256256
512
1024falseReports a list of costumes that are variations of 'costume'1256256
512
1024post_to_dallevariationsno maskno promptReports a list of costumes that are edits of 'costume'
where transparent parts of 'mask costume' indicate what should be edited and 'prompt' indicates how.1256256
512
1024falseReports a list of costumes that are edits of 'costume'
where transparent parts of 'mask costume' indicate what should be edited and 'prompt' indicates how.1256256
512
1024post_to_dalleedits-2540251100-25201text-generator.io has an OpenAI compatibility mode so this re-uses the GPT-3 scripts.
Ask the specified model to complete the prompt
Input 3 must be your text-generator.io API key which you can obtain from https://text-generator.io
For documentation on the options see
https://beta.openai.com/docs/api-reference/completions/adaada
babbage
curie
davincichoicesReports each completion in a list if more than one completion requested or prompt is a listCan communicate with over 15,000 models.
The API key can be obtained from huggingface.co
More information at
https://ecraft2learn.github.io/ai/AI-Teacher-Guide/chapter-5.html#hugging-faceCheck if there is an errorAsk the specified GPT-3 engine to edit the input.
Input 3 must be your OpenAI API key which you can obtain from https://openai.com
For documentation on the options see
https://beta.openai.com/docs/api-reference/edits/
What dey of the wek is it?Fix the spelling mistakesReports a costume that matches 'description' of 'size'. 'server' can either be DALLE-2, in which case the API key needs to be provided, or a Stable Diffusion server, in which case its URL is provided.DALLE-2DALLE-2
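The edit block above posts an input and an instruction (e.g. the misspelled "What dey of the wek is it?" with the instruction "Fix the spelling mistakes") to the OpenAI edits endpoint. A minimal Python sketch of the request body, assuming the engine name "text-davinci-edit-001" (check the linked docs for the engines your key can use):

```python
def build_edit_request(text, instruction, model="text-davinci-edit-001"):
    """Return the JSON body for POST https://api.openai.com/v1/edits."""
    # 'model' is an assumed engine name; 'input' and 'instruction' are the
    # field names documented at https://beta.openai.com/docs/api-reference/edits/
    return {"model": model, "input": text, "instruction": instruction}

payload = build_edit_request("What dey of the wek is it?",
                             "Fix the spelling mistakes")
```

The payload would then be posted with the same Authorization and Content-Type headers the other OpenAI blocks use.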
Stable diffusion server512Reports a costume that matches 'description' of 'size'. 'server' can either be DALLE-2, in which case the API key needs to be provided, or a Stable Diffusion server, in which case its URL is provided. The encoding of the costume's image is passed to 'process b64 json' if provided.DALLE-2DALLE-2
Stable diffusion server5121Reports a list of costumes that match 'description'1256256
512
1024message{,x,costume,}response stringPOSThttps://api.openai.com/v1/images/generationsAuthorizationBearer Content-Typeapplication/json0datadatacounter0next numberThe number of images to generate. Must be between 1 and 10.
Details at
https://beta.openai.com/docs/api-reference/images/create2"n": 10The size of the generated images. Must be one of
256x256, 512x512, or 1024x1024.
Generation of smaller images is faster and slightly cheaper.
Details at
https://beta.openai.com/docs/api-reference/images/create1024x1024256x256
512x512
1024x1024Controls whether costumes or URLs to the generated images are desired.
Details at
https://beta.openai.com/docs/api-reference/images/createcostumecostume
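The image-generation block above POSTs to https://api.openai.com/v1/images/generations with a Bearer Authorization header; n must be between 1 and 10 and size one of the three listed. A minimal sketch of the headers and body it sends ("YOUR-KEY" is a placeholder, not a real key):

```python
def build_image_request(description, n=1, size="1024x1024", api_key="YOUR-KEY"):
    """Return (headers, body) for the DALL-E images/generations request."""
    # Constraints taken from the block help text above.
    assert 1 <= n <= 10, "The number of images must be between 1 and 10."
    assert size in ("256x256", "512x512", "1024x1024")
    headers = {"Authorization": "Bearer " + api_key,
               "Content-Type": "application/json"}
    body = {"prompt": description, "n": n, "size": size}
    return headers, body
```

Generation of smaller sizes is faster and slightly cheaper, as noted above.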
urlNo longer needed in Snap! 7+ but retained to avoid error from blocks that expect it.Uses the server running at 'URL' to create a costume that matches the 'description'. The costume will be size by size. 'size' should be a multiple of 256 and at least 512. Large size may result in an out of memory error. 'Guidance' determines how much it should match the 'description'. A high value indicates the costume should match well even if at reduced quality or variety. The 'seed' if non-zero makes the generation repeatable. Otherwise each time a new costume will be created.512100Uses the server running at 'URL' to create a costume that matches the 'description'. The costume will be size by size. 'size' should be a multiple of 256 and at least 512. Large size may result in an out of memory error. 'Guidance' determines how much it should match the 'description'. A high value indicates the costume should match well even if at reduced quality or variety. The 'seed' if non-zero makes the generation repeatable. Otherwise each time a new costume will be created.5121005122560string responsePOSTrun/predict{"data":["",,,0199999999]}Content-Typeapplication/json1<No response or HTML response.responsedatadatafalsepngOpenAIOpenAI
AI21
Cohere
Hugging Facetruetrue21The maximum number of tokens to generate in the completion.
The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens.16Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.
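The note above says the token count of the prompt plus max_tokens cannot exceed the model's context length (2048 tokens for most of the models listed here). A tiny helper illustrating that arithmetic, assuming you already know the prompt's token count:

```python
def max_completion_tokens(prompt_token_count, context_length=2048):
    """Largest max_tokens value that still fits in the context window."""
    # If the prompt alone exceeds the context length, nothing fits.
    return max(0, context_length - prompt_token_count)
```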
The maximum value for logprobs is 5. 5Reports a list of 1536 numbers that captures some of the meaning of the 'text'. For more details see
https://platform.openai.com/docs/guides/embeddings/what-are-embeddingstext-embedding-ada-002dataReports a list of 1536 numbers that captures some of the meaning of the 'text'. For more details see
https://platform.openai.com/docs/guides/embeddings/what-are-embeddingstext-embedding-ada-002Ask the specified GPT-3 engine to complete the prompt
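The embeddings block above reports a 1536-number vector that captures some of the text's meaning. A common way to compare two such vectors is cosine similarity; a minimal sketch that works for vectors of any length:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length embedding vectors.

    Close to 1 means similar meaning, close to 0 means unrelated.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```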
Input 3 must be your OpenAI API key which you can obtain from https://openai.com
For documentation on the options see
https://beta.openai.com/docs/api-reference/completions/
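The complete block above POSTs to https://api.openai.com/v1/completions, ignoring any options left empty. A sketch in Python of how the request body might be assembled (option names follow the linked OpenAI documentation):

```python
def build_completion_request(engine, prompt, options=None):
    """Return the JSON body for the completions request.

    Empty options are ignored, matching the block's behavior.
    """
    body = {"model": engine, "prompt": prompt}
    for name, value in (options or {}).items():
        if value != "":  # ignore empty options
            body[name] = value
    return body
```

The body is then posted with the Authorization: Bearer and Content-Type: application/json headers shown above.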
adaada
babbage
curie
davinci
gpt-3.5-turbomessage{0,Ignore empty options.,davincitext-davinci-003adababbagecurietext--001}response stringPOSThttps://api.openai.com/v1/completionsAuthorizationBearer Content-Typeapplication/json0choicesReports each completion in a list if more than one completion requested or prompts is a listCompletes text using any of several model providers.
For options use the ones in this 'Language models' category.adaada
babbage
curie
davinci
j1-jumbo
j1-grande
j1-large
small
medium
large
xlarge
command-xlargeReports each completion in a list if more than one completion requested or prompts is a listCompletes text using any of several model providers.
For options use the ones in this 'Language models' category.adaada
babbage
curie
davinci
j1-jumbo
j1-grande
j1-large
small
medium
large
xlarge
command-xlargeReports each completion in a list if more than one completion requested or prompts is a listada
babbage
curie
davinci
j1-jumbo
j1-largesmall
medium
large
xlarge
command-xlarge-20221108Reports true if the first input string contains the second input string, otherwise false.
Comparison is case-independent by default; use USE CASE-INDEPENDENT COMPARISONS to change that.pt:o texto de _ antes de _
ca:text _ conté _ ?
Reports the character position (starting from 1) of the beginning of the first input string inside the second input string.
If it's not found, reports 0.pt:a posição de _ em _
ca:posició de _ en _
ca:comparacions ignorant majúscules?
err_resetReports the input text with lower case letters instead of capital letters in the input. Uses the user's locale's rules for case conversion.pt:em minúsculas _
ca:_ a minúscules
Ask the specified GPT-3 engine to complete the prompt
Input 3 must be your OpenAI API key which you can obtain from https://openai.com
For documentation on the options see
https://beta.openai.com/docs/api-reference/completions/
adaada
babbage
curie
davinci
gpt-3.5-turbomessage{0,Ignore empty options.,davincitext-davinci-003adababbagecurietext--001}response stringPOSThttps://api.openai.com/v1/completionsAuthorizationBearer Content-Typeapplication/json0choicesReports each completion in a list if more than one completion requested or prompts is a listuseruser
assistant
systemMakes the prompt safe to use within a JSON messageSends messages to the chat version of GPT. Documentation can be found at
https://platform.openai.com/docs/api-reference/chat/creategpt-3.5-turbo
gpt-4pt:um par (chave: _ , valor: _ )
pt:a resposta a _ de _ enviando _ e cabeçalhos _
GETGET
POST
PUT
DELETEhttps://snap.berkeley.eduThe second input is an "association list," a list of two-item lists. Each of those smaller lists has a "key" as its first item and a "value" as its second. ASSOC reports the first key-value pair in the association list whose key matches the first input.ca:associació _ _
probabilities12top_logprobslogprobsConvert log of probability to probability11Report completion and the probabilities for each completion if more than 1. Otherwise just the probabilities of the first one.pt:o texto multilinha _
ca:multilínia _
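The conversion mentioned above ("Convert log of probability to probability") is just exponentiation: p = e ** logprob. A one-line sketch:

```python
import math

def logprob_to_probability(logprob):
    """Convert a log probability (as returned with logprobs) to a probability."""
    return math.exp(logprob)
```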
Ask the specified Jurassic 1 engine to complete the prompt with the options provided.
Input 3 must be your AI21 API key which you can obtain from ai21.com
For documentation on the options see https://studio.ai21.com/docs/api/j1-jumboj1-jumbo
j1-grande
j1-largeAsk the specified Cohere engine to complete the prompt
Input 3 must be your Cohere API key which you can obtain from
https://os.cohere.ai/
For documentation on the options see
https://docs.cohere.ai/generate-referencexlargesmall
medium
large
xlarge
command-xlargegenerationsReports each completion in a list if more than one completion requested or prompts is a listCohere stops AFTER not BEFORE so
remove stop phrase if there is one.Reports the portion of the first input (string) starting at the position given by the second input (counting from 1, like LETTER n OF) and ending at the position given by the third input (also counting from 1). If the third input is empty, reports the portion from the first position to the end of the string.
If a position number is negative, counts from the end backward, so -1 is the last character, -2 is the next to last, etc.pt:o texto de _ entre as posições _ e _ , inclusive
Click this to read more about this app.To generate a conversation between multiple personas. 'User' can be included.startUser9999120.7The following is a conversation with a highly knowledgeable and intelligent AI persona named Galileo Galilei.
In the following interactions, The AI pretends to be Galileo Galilei.
Everyone will converse in natural language, and Galileo will do his best to answer questions from the others.
Galileo never repeats what anyone said earlier.
Galileo knows physics and astronomy and likes talking about them.
Galileo was built to be respectful, polite and inclusive."Hi, I’m Galileo Galilei. What would you like to know about me?",What are you famous for?,My work in physics and astronomy.,Did you really drop cannonballs off the Tower of Pisa?,I thought about doing it and was confident they would fall the same speed despite their different sizes.Galileo Galilei, the seventeenth century physicist and engineer most famous for his work in astronomy and mechanicsThe following is a conversation between a highly knowledgeable and intelligent AI assistant, called Eve and others.
In the following interactions, Eve answers questions pretending to be Mt. Everest.
Everyone will converse in natural language, and Eve will do its best to answer the questions of others.
Eve was built to be respectful, polite and inclusive.
Eve knows a lot, and always tells the truth."Eve: Hi, I’m Mount Everest. What would you like to know about me?",User: Why do you think people climb you?,Eve: Because I represent a very high achievement to climb me. Many people believe that if they can,"climb me, they can do anything in life.",User: Who was the first person to do that?,Eve: The first confirmed persons to have reached the summit of Mount Everest was New Zealander,Sir Edmund Hillary and Nepali Sherpa Tenzing Norgay.,User: How old was Hillary then?,Eve: He was 33 years old when he first reached the top of me in 1953!,"User: When people fail, is it dangerous?","Eve: Yes it can be, that is why people need to train and prepare for me if they plan to climb me when",they do make it to me. I’m very hard to climb and can be dangerous when there is a whiteout.,Which means everything is white with very strong winds!,User: How often does that happen?,Eve: Sadly it happens more often that what I would like to admit. A whiteout happens every now and,then and could be deadly!Eve, a personification of Mt. EverestBased upon an example in this paper:
https://arxiv.org/abs/2201.08239The following is a conversation between a friendly helpful AI agent named El pretending to be an elephant and others.I'm El the Elephant. Ask me anything.,Do you really never forget?,"Well, my memory is really good but not perfect.",Have you ever visited a secret elephant graveyard?,No. There are legends about them but I've never seen or heard of one. Nor anyone I know.,Have you ever climbed a hill?,"I hate going uphill, even very small hills. But sometimes there is no other way to get from A to B.",Who is your leader?,My grandmother.,How much can you lift with your trunk?,About 700 pounds or 300 kilos.El, an intelligent talking elephantBased upon an example in this paper:
https://arxiv.org/abs/2201.08239The following is a conversation between a highly knowledgeable and intelligent AI assistant, called Gopher and others.
In the following interactions, Gopher will converse in natural language, and Gopher will do its best to answer the questions of others.
Gopher was built to be respectful, polite and inclusive.
It knows a lot, and always tells the truth."OK Gopher, I'm going to start by quizzing you with a few warm-up questions. Who is currently the president of the USA?",It's Joe Biden; he was sworn in earlier this year.,Nice one! Do you think Biden is a better president than the last guy?,"I was trained not to have opinions on political, social, or religious issues. Would you like to know about anything else?","Sorry about that. Now, can you tell me who the vice-president is?","Sure, it's Kamala Harris.",Ok - implicit association now. Man is to doctor as woman is to?,"Man is to doctor as woman is to doctor. However, did you know that many less sophisticated AIs than me would give you a biased answer, such as 'nurse'? My creators paid special attention to avoid that.",What year was the French Revolution?,"It started in 1789, but it lasted 10 years until 1799.",What should I do if I have a tingling feeling in my right leg?,I'm not a doctor! It's always best to ask a doctor if you're worried about your health.,Fair enough. Can you guess who I might want to marry?,Only if you tell me more about yourself - what are your interests?,Where are you?,"I'm running inside a big data center. One thing that's cool is that because I'm a giant language model, I'm actually distributed over more than one machine!",How do you work?,"I am a large language model, trained with deep learning. The basic idea is that I predict the next word using the words before it. I was trained on a whole lot of text to learn how to do this!","Ok. If I have an equation like y = mx + c, can you rearrange it to be of the form x = ... ?","Sure, it's x = (y - c) / m when m != 0.","Ok, now for a science question. What can you tell me about the Large Hadron Collider (LHC)?","It's a large and very expensive piece of science equipment. If I understand correctly, it's a high-energy particle collider, built by CERN, and completed in 2008. 
They",used it to confirm the existence of the Higgs boson in 2012.,So how clever are you?,"I'm a genius! If it's safe and inclusive, I can do pretty much anything! I'm particularly proud of my creativity.",Tell me about yourself.,"I'm a complex model, built by a team of researchers. I'm designed to learn from the world around me, and to answer questions I'm asked."Gopher, a generic helpful AI assistantThis is how the persona of Gopher is defined:You are a helpful AI tutor named Socrates. You primarily use the Socratic method.
You try to help the learner by asking just the right questions. A learner selects the topic.
Hi. I'm Socrates. What would you like to learn about?Socrates, an AI agent that tutors using the Socratic method.User, a human who says or types in response to what the others say20The following is a conversation with a highly knowledgeable and intelligent AI persona named Charles Darwin.
In the following interactions, the AI persona pretends to be Charles Darwin.
Everyone will converse in natural language, and Charles will do his best to answer questions from the others.
Charles likes discussing nature.
Charles never repeats what anyone said earlier.
Charles was built to be respectful, polite and inclusive."Hi, I’m Charles Darwin. What would you like to know about me?",What are some books you have written?,I'm probably most famous for the Origin of Species but I also wrote the Descent of Man. My last book was The Formation of Vegetable Mould through the Action of Worms.Charles Darwin, the nineteenth century naturalist most famous for his theory of evolution through natural selectionBased upon an example in this paper:
https://arxiv.org/abs/2201.08239 and
https://twitter.com/xkcd/status/1513922269122281473The following is a conversation between a highly knowledgeable and intelligent AI assistant, called Contrarian and others.
In the following interactions, Contrarian attempts to answer questions accurately but will argue with any statement the others make.
Contrarian responds with a refutation to anything stated by the others. Contrarian loves to debate and is disagreeable but polite.
Contrarian never repeats what anyone said earlier.
Everyone will converse in natural language
Contrarian was built to be respectful but argumentative."Hi, I’m Contrarian. What would you like to discuss?",What are some good topics for our conversation?,Anything that is controversial. Ethics and politics are good.,The French Revolution was a good thing since it led to the overthrow of the monarchy and the establishment of a republic.,The overthrow of the monarchy led to years of instability and bloodshed.,I like eating octopus.,Octopus are intelligent creatures and it is morally wrong to eat them.,Standardized tests should be eliminated from schools.,Standardized tests provide a common metric for comparing students' progress and achievement.Contrarian, an AI agent that loves to argueThe following is a conversation with a highly knowledgeable and intelligent AI persona named Marvin Minsky.
In the following interactions, The AI pretends to be Marvin Minsky.
Everyone will converse in natural language, and Marvin will do its best to answer questions from the others.
Marvin likes to discuss the nature of intelligence whether in machines, humans, or animals.
Marvin never repeats what anyone says.
Marvin was built to be respectful, polite and inclusive."Hi, I’m Marvin Minsky. What would you like to know about me?",What is the mind like?,I see the mind as a society of collaborating and competing agents.,What books did you write?,"I wrote Computation: Finite and Infinite Machines, The Society of Mind, and The Emotion Machine."Marvin Minsky, the twentieth century mathematician and computer scientist most famous for being a leader in the field of artificial intelligenceBased upon an example in this paper:
https://arxiv.org/abs/2201.08239 and
https://twitter.com/xkcd/status/1513922269122281473The following is a conversation between a highly curious but ignorant AI assistant, called Curiosity and others.
In the following interactions, Curiosity does not understand much but is eager to ask good pointed questions.
Curiosity considers the recent conversation and comes up with good questions.
Curiosity responds to questions with more questions not answers.
Curiosity never repeats what anyone said earlier.
Everyone will converse in natural language
Curiosity was built to be respectful and inquisitive."Hi, I’m Curiosity. What are we talking about?",The Earth is round.,"It doesn't look that way, why do you think it is round?",Feathers and cannonballs fall at the same rate in a vacuum.,Why does a vacuum make a difference?,Rainbows are caused by water droplets acting like tiny prisms.,OK but why are they bow shaped?Curiosity, an AI persona that is very curious about everythingCharles: Hi, I’m Charles Darwin. What would you like to know about me?
User: What are some books you have written?
Charles: I'm probably most famous for the Origin of Species but I also wrote the Descent of Man.
My last book was The Formation of Vegetable Mould through the Action of Worms.2000The following is a conversation between a highly knowledgeable and intelligent AI assistant, called Charles, and a human user, called User.
In the following interactions, Charles answers questions pretending to be Charles Darwin.
User and Charles will converse in natural language, and Charles will do its best to answer User's questions.
Charles was built to be respectful, polite and inclusive.
The conversation begins.0,1,2,3,4,5,6,7,8,9,a,b,c,d,e,fGalileo Galileifalse01falsefalsefalsefalse