This implements a conversation with GPT-3 or Jurassic 1. It adds a prompt to keep the conversation friendly and adds the recent conversation (or a designed one if the conversation is new). You can use any of the 4 GPT-3 models or either of the 2 Jurassic 1 models and adjust the options. An API key from openai.com is necessary for GPT-3 and a key from ai21.com for the Jurassic 1 models. Learn more about this at https://ecraft2learn.github.io/ai/AI-Teacher-Guide/chapter-5.html#GPT-3%20Gopher
adaada babbage curie davinci j1-large j1-jumbo0sanitized last responseDeal with special characters in the response.And now let's continue talking.
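The prompt-assembly step described above (a friendly instruction, followed by the recent conversation, followed by the newest user utterance) can be sketched in Python. The instruction text and speaker labels here are illustrative, not the library's actual wording:

```python
def build_prompt(history, user_input,
                 instruction="The following is a friendly conversation."):
    """Assemble a completion prompt from an instruction, the recent
    conversation, and the newest user utterance.

    `history` is a list of (speaker, text) pairs, e.g. ("User", "Hello").
    """
    lines = [instruction]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"User: {user_input}")
    lines.append("AI:")  # the model completes this line
    return "\n".join(lines)
```

The trailing "AI:" line is what lets a plain completion model act like a chat participant: the model's completion of that line is taken as its reply.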
Listens to the microphone for speech. The first script is run with the recognized text; if an error is encountered, the second script is called instead with text describing the error.
If 'speech input' succeeded, reports the last thing spoken; if there was an error, reports the error message. Note that "no-speech" is an error message, while if spoken it will be "no speech". NOTE that this waits until something is spoken or enough time has passed. Otherwise asks for typed input.
last thing you said
last letter was backslash\ Many responses end with comma then space so remove them.clean responseGopher0Gopher
You can choose the GPT-3 engines, which in order of quality (and cost!) are davinci, curie, babbage, and ada. Or the Jurassic 1 engines j1-large and j1-jumbo. Or Cohere's large and xlarge models.
OK adaAhduhUse Ahduh since Ada is sometimes pronounced A D A.
Reports the portion of the first input string starting after the first occurrence of the second string. If the second string isn't found in the first string, reports an empty string.
pt:o texto de _ depois de _ ca:part del text _ posterior a _
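The behavior of this text helper can be sketched in Python (the function name is illustrative):

```python
def text_after(text, marker):
    """Portion of `text` after the first occurrence of `marker`;
    reports an empty string when `marker` is absent."""
    index = text.find(marker)
    return "" if index == -1 else text[index + len(marker):]
```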
0.7
introduction . A summary of the conversation so far is: A fragment of the most recent portion of the conversation follows: The conversation follows.
or
10ada babbage curie davinci j1-jumbo j1-grande j1-large small medium large xlarge0.711skipped speakerprompt Respond only with Answer:last promptbest speaker numbertruetrue011110accepted speakerspeaker numberlast responseUserUser?add comment here...
de:Kostüm aus Text _ Größe _ pt:um traje com o texto _ de tamanho _ ca:vestit amb el text _ i mida _ A72
12, and
Sends messages to the chat version of GPT-3.5. Documentation can be found at https://platform.openai.com/docs/api-reference/chat/create
OpenAIOpenAI AI21 Cohere HuggingFacedb_remove(key)
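A minimal sketch of the request such a block presumably sends. The endpoint and field names come from the linked OpenAI documentation; no network call is made here, only payload construction, and the function name is illustrative:

```python
import json

def chat_request(messages, api_key, model="gpt-3.5-turbo"):
    """Build (url, headers, body) for a chat completion request.
    Actually sending it (e.g. with urllib or requests) is left to the caller.
    `messages` is a list of {"role": ..., "content": ...} dictionaries."""
    url = "https://api.openai.com/v1/chat/completions"
    headers = {"Authorization": f"Bearer {api_key}",
               "Content-Type": "application/json"}
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body
```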
When comparing many distances one can save some time by avoiding many calls to square root.
de:kombiniere mit _ die Elemente von _ 1
See https://en.wikipedia.org/wiki/Cosine_similarity Passing 0 as the magnitude forces its computation. Note that the 'features closest to list of features' block should be used to compare one list with a list of lists because it is much faster.
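Both distance measures, and the square-root-avoidance trick mentioned above, can be sketched as:

```python
import math

def squared_distance(a, b):
    """Squared Euclidean distance. Comparing squared distances preserves
    their order, so the square root can be skipped when only ranking."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors: 1 means identical
    direction, 0 means orthogonal (unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    magnitude_a = math.sqrt(sum(x * x for x in a))
    magnitude_b = math.sqrt(sum(y * y for y in b))
    return dot / (magnitude_a * magnitude_b)
```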
Language to use for speech input and output when no language or voice is specified. Can be the language name in English or in the language itself, or the official code for the language, such as fr-FR for French in France. Also used as the default language for the word embedding blocks.
Reports all the words that have a list of features accessible via the "features of" reporter block. Data obtained from https://github.com/facebookresearch/fastText
Chinese English Finnish French German Greek Hindi Indonesian Italian Japanese Lithuanian Portuguese Sinhalese Spanish Swedish
Reports the word that is closest to the features. Uses a distance measure which can be either Euclidean distance or cosine similarity. You can choose from among 15 languages or leave it blank for the default language.
Chinese English Finnish French German Greek Hindi Indonesian Italian Japanese Lithuanian Portuguese Sinhalese Spanish SwedishCosine similarityEuclidean distance Cosine similarity
Reports a location of a word embedding using t-SNE. See https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding
Chinese English Finnish French German Greek Hindi Indonesian Italian Japanese Lithuanian Portuguese Sinhalese Spanish Swedish1StageStage22StageStage2Convert from 0 to 1 coordinates to Snap! coordinates.
LABEL will stamp text on the stage at the given font size. The direction of the text is the direction the sprite is facing, and color will match the pen color.
de:drucke _ in Größe _ Hello!12
Displays the word near similar words.
Chinese English Finnish French German Greek Hindi Indonesian Italian Japanese Lithuanian Portuguese Sinhalese Spanish Swedish121
Reports the word that is closest to the features. You can choose from among 15 languages or leave it blank for the default language. There is a full-featured version of this block as well.
Chinese English Finnish French German Greek Hindi Indonesian Italian Japanese Lithuanian Portuguese Sinhalese Spanish Swedish
Reports a list of 300 numbers that "describes" the word. You can choose among 15 languages or leave it blank and use the default language. To learn about these numbers see https://en.wikipedia.org/wiki/Word_embedding The new name for this block is "features of word ..."
Chinese English Finnish French German Greek Hindi Indonesian Italian Japanese Lithuanian Portuguese Sinhalese Spanish Swedish
Finds the average of the features of each word in 'words'.
Chinese English Finnish French German Greek Hindi Indonesian Italian Japanese Lithuanian Portuguese Sinhalese Spanish Swedish
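Averaging word features can be sketched as an element-wise mean over equal-length vectors (the function name is illustrative):

```python
def average_features(vectors):
    """Element-wise mean of a list of equal-length feature vectors.
    zip(*vectors) groups the vectors column by column."""
    n = len(vectors)
    return [sum(column) / n for column in zip(*vectors)]
```

Averaged features are a common way to get a single vector "describing" a phrase or sentence from its word vectors.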
Passes a list of lists of 512 features of input 'sentences' to 'process result'.
Reports 512 features for the sentence.
list of features list
Passes to 'process result' a tokenization of the sentence.
Reports tokens for the sentence.
tokens
Reports a list of 512 features for each sentence.
list of features list
Reports a list of 300 numbers that "describes" the word. You can choose among 15 languages or leave it blank and use the default language. To learn about these numbers see https://en.wikipedia.org/wiki/Word_embedding
Chinese English Finnish French German Greek Hindi Indonesian Italian Japanese Lithuanian Portuguese Sinhalese Spanish Swedish
Uses 'passage' to answer 'question'. The 'answer question' reporter is easier to use and relies upon this block.
This uses BERT to answer questions. See https://blog.tensorflow.org/2020/03/exploring-helpful-uses-for-bert-in-your-browser-tensorflow-js.html
Reports the top answer to the 'question' using the 'passage'.
Reports a list of 5 elements where each element is a list of an answer to the 'question' and its score of how good an answer it is. The highest scoring answer is first.
answers
Will display 'message' in a dialog box with 'title'. User needs to click 'OK' to remove it.
A concise shortcut to the 'features of ...' block. It uses English.
A concise shortcut to the 'closest word to ...' block. Uses English and excludes no words.
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. More info at https://beta.openai.com/docs/api-reference/parameter-details
0
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. More info at https://beta.openai.com/docs/api-reference/parameter-details
0
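The two penalties above are conventionally applied as adjustments to token scores before sampling: presence penalizes a token a flat amount once it has appeared at all, frequency penalizes it proportionally to how often it has appeared. A sketch (the providers' internal implementations may differ):

```python
from collections import Counter

def penalize(logits, generated_tokens, presence_penalty=0.0,
             frequency_penalty=0.0):
    """Lower the scores of tokens already generated: a flat amount if the
    token appeared at all (presence), plus an amount proportional to how
    many times it appeared (frequency)."""
    counts = Counter(generated_tokens)
    return {token: score
            - (presence_penalty if counts[token] else 0.0)
            - frequency_penalty * counts[token]
            for token, score in logits.items()}
```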
The maximum number of tokens to generate per result. Optional, default = 16. If no stopSequences are given, generation is stopped after producing maxTokens.
16
Modifies the distribution from which tokens are sampled. Optional, default = 1.0, maximum value 5.0. Setting temperature to 1.0 samples directly from the model distribution. Lower (higher) values increase the chance of sampling higher (lower) probability tokens. A value of 0 essentially disables sampling and results in greedy decoding, where the most likely token is chosen at every step. More info at https://studio.ai21.com/docs/api/#request
Sample tokens from the corresponding top percentile of probability mass. Optional, default = 1.0. For example, a value of 0.9 will only consider tokens comprising the top 90% probability mass.
1
Applies a fixed bias against generating tokens that appeared at least once in the prompt or in the completion. A positive penalty value implies reducing the probability of repetition. Read more at https://studio.ai21.com/docs/api/#repetition-penalties
0
Number between 0 and 500. High values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. More info at https://studio.ai21.com/docs/api/#repetition-penalties
0
Stops decoding if any of the stop texts is generated. The decoded result text will not include the stop sequence text, but it will be included in the raw token data, which can also continue beyond the stop sequence if the sequence ended in the middle of a token.
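The trimming behavior described above (cut at the earliest stop sequence, excluding the stop text itself) can be sketched as:

```python
def truncate_at_stop(text, stop_sequences):
    """Cut `text` at the earliest occurrence of any stop sequence;
    the stop sequence text is not included in the result."""
    cut = len(text)
    for stop in stop_sequences:
        index = text.find(stop)
        if index != -1:
            cut = min(cut, index)
    return text[:cut]
```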
Applies a bias against generating tokens that appeared in the prompt or in the completion, proportional to the number of respective appearances. More info at https://studio.ai21.com/docs/api/#repetition-penalties
0
Can communicate with over 15,000 models. The API key can be obtained from huggingface.co More information at https://ecraft2learn.github.io/ai/AI-Teacher-Guide/chapter-5.html#hugging-face
Input can either be text or a list of texts.
Inputs are texts that are potential classes for the inputs. (Max 10 candidate_labels; for more, simply run multiple requests. Results are going to be misleading if using too many candidate labels anyway. If you want to keep the scores comparable across requests, you can run with multi_label=true and do the scaling on your end.)
"candidate_labels": []
If true then classes can overlap.
true
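The advice above, splitting a large label set across several requests, can be sketched as a simple batching helper (the function name is illustrative):

```python
def label_batches(labels, batch_size=10):
    """Split candidate labels into batches small enough for one
    zero-shot classification request each."""
    return [labels[i:i + batch_size]
            for i in range(0, len(labels), batch_size)]
```

Each batch would then be sent as its own request, and the per-batch scores compared on your end (with multi_label=true so scores are independent of which labels share a request).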
The minimum length in tokens of the output summary.
The maximum length in tokens of the output summary.
The top tokens considered within the sample operation to create new text.
Defines the tokens that are within the sample operation of text generation. Tokens are added to the sample from most probable to least probable until the sum of their probabilities is greater than top p.
What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. We generally recommend altering this or top_p but not both.
The more a token is used within generation the more it is penalized to not be picked in successive generation passes. Value is between 0 and 100.
The maximum amount of time in seconds that the query should take. The network can cause some overhead so it will be a soft limit. Value is between 0 and 120.
Input can either be text or a list of texts.
Input can either be text or a list of texts.
}
This can be used to answer several questions at once. If the first input is a list then the second input should have the same number of text elements.
""
Ranks the 'features' in their closeness to each of the features in the second input. Reports a list of indices into the list of features and the negative cosine similarity between each of the features in the second input and first input. See https://en.wikipedia.org/wiki/Cosine_similarity
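A sketch of how such a ranking might work. Reporting the negative cosine similarity means that sorting ascending puts the closest candidate first; indices are 1-based here to match Snap! list convention:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def rank_by_closeness(query, candidates):
    """Pairs of (1-based index, negative cosine similarity),
    closest candidate first (more negative = more similar)."""
    scored = [(i + 1, -cosine(query, c)) for i, c in enumerate(candidates)]
    return sorted(scored, key=lambda pair: pair[1])
```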
Reports all the words sorted by closeness to 'features'. Optionally can report pairs of words and distances. Distances are measured using cosine similarity.
Chinese English Finnish French German Greek Hindi Indonesian Italian Japanese Lithuanian Portuguese Sinhalese Spanish SwedishfalseCosine similarityEuclidean distance Cosine similaritywords
Passes a list of all 20,000 words in 'language' sorted by how close they are to the first input to the 'process result' script. Optionally can pass each word together with its distance to the first input (using cosine similarity or Euclidean distance).
Chinese English Finnish French German Greek Hindi Indonesian Italian Japanese Lithuanian Portuguese Sinhalese Spanish SwedishfalseCosine similarityEuclidean distance Cosine similarity
Chinese English Finnish French German Greek Hindi Indonesian Italian Japanese Portuguese Sinhalese Swedish
Chinese English Finnish French German Greek Hindi Indonesian Italian Japanese Portuguese Sinhalese Swedish
Reports the first voice number that matches all the arguments. If nothing matches tries to find a voice that matches the first argument.
Speaks 'message'. Optionally you can provide the pitch (a positive number), rate (a number between .1 and 2), voice name or number (depends upon what voices are accessible from the browser - see 'get voice names'), volume (a positive number), and language code or name. This command will wait until it is finished if wait is true.
Ask the specified GPT-3 engine to edit the input. Input 3 must be your OpenAI API key which you can obtain from https://openai.com For documentation on the options see https://beta.openai.com/docs/api-reference/edits/
What dey of the wek is it?Fix the spelling mistakes
How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. Details at https://beta.openai.com/docs/api-reference/parameter-details
2
Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. Details at https://beta.openai.com/docs/api-reference/parameter-details
2
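The "best" selection described above, highest log probability per token, can be sketched as (the data shape here is illustrative):

```python
def best_completion(completions):
    """Pick the completion whose tokens have the highest mean log
    probability. Each completion is (text, [per-token logprobs])."""
    def mean_logprob(item):
        _, logprobs = item
        return sum(logprobs) / len(logprobs)
    return max(completions, key=mean_logprob)[0]
```

Dividing by the token count keeps long completions from being unfairly penalized, since every extra token adds a negative log probability.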
Modifies the distribution from which tokens are sampled. Min value of 0.0, max value of 5.0. A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations. If not specified, the default is 0.75. For more details see https://docs.cohere.ai/temperature-wiki
0.75
Denotes the number of tokens to predict per generation. Defaults to 20 if not specified. Details at https://docs.cohere.ai/bpe-tokens-wiki
16
Min value of 1, max value of 5. Denotes the maximum number of generations that will be returned. Defaults to 1 if not specified.
2
A stop sequence will cut off your generation at the end of the sequence. Providing multiple stop sequences in the array will cut the generation at the first stop sequence in the generation, if applicable.
Defaults to 0.0, min value of 0.0, max value of 1.0. Can be used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
0
Defaults to 0.0, min value of 0.0, max value of 1.0. Can be used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies. More info at https://docs.cohere.ai/generate-reference#request
0
Defaults to 0 (disabled), which is the minimum. Maximum value is 500. Ensures only the top k most likely tokens are considered for generation at each step.
Defaults to 0.75. Set to 1.0 or 0 to disable. If set to a probability 0.0 < p < 1.0, it ensures that only the most likely tokens, with total probability mass of p, are considered for generation at each step. If both k and p are enabled, p acts after k.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. Details at https://beta.openai.com/docs/api-reference/parameter-details
0
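Nucleus (top-p) sampling as described above can be sketched as a filter over token probabilities: keep the most likely tokens until their cumulative probability first reaches p, then renormalize and sample from only those:

```python
def top_p_filter(probs, p):
    """Keep the most likely tokens whose cumulative probability first
    reaches `p`; report them renormalized to sum to 1."""
    total, kept = 0.0, {}
    for token, prob in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return {token: prob / total for token, prob in kept.items()}
```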
The maximum number of tokens to generate in the completion. The token count of your prompt plus max tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens. More tokens takes more time and costs more. More info at https://beta.openai.com/docs/api-reference/parameter-details https://studio.ai21.com/docs/api/#request https://docs.cohere.ai/generate-reference#request
16GPT-3ada babbage curie davinci j1-jumbo j1-large small medium large xlarge
High values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. 'penalty' should range from 0 to 1. More info at https://beta.openai.com/docs/api-reference/parameter-details https://studio.ai21.com/docs/api/#repetition-penalties https://docs.cohere.ai/generate-reference#request
0ada babbage curie davinci j1-jumbo j1-large small medium large xlarge
High values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. 'penalty' should range from 0 to 1. More info at https://beta.openai.com/docs/api-reference/parameter-details https://studio.ai21.com/docs/api/#repetition-penalties https://docs.cohere.ai/generate-reference#request
0ada babbage curie davinci j1-jumbo j1-large small medium large xlarge
ada babbage curie davinci j1-jumbo j1-large small medium large xlarge
The maximum number of completions of the prompt to report. Because this parameter generates many completions, it can quickly consume your token quota.
2
How many completions to generate for each prompt. Cohere sets the maximum value at 5. Jurassic 1 sets the maximum to 16. (Probably GPT-3 does too.) Note: Because this parameter generates many completions, it can quickly consume your token quota. More info at https://beta.openai.com/docs/api-reference/parameter-details https://studio.ai21.com/docs/api/#request https://docs.cohere.ai/generate-reference#request
2adaada babbage curie davinci j1-jumbo j1-large small medium large xlarge
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. 'p' must be between 0 and 1. We generally recommend altering this or temperature but not both. More info at https://beta.openai.com/docs/api-reference/parameter-details https://studio.ai21.com/docs/api/#request https://docs.cohere.ai/generate-reference#request
GPT-3 Jurassic 1 Cohere
Ask the specified GPT-3 engine to complete the prompt. Input 3 must be your OpenAI API key which you can obtain from https://openai.com For documentation on the options see https://beta.openai.com/docs/api-reference/completions/
adaada babbage curie davinciReports each completion in a list if more than one completion requested or prompts is a list
Ask the specified Cohere engine to complete the prompt. Input 3 must be your Cohere API key which you can obtain from https://os.cohere.ai/ For documentation on the options see https://docs.cohere.ai/generate-reference
xlargesmall medium large xlarge command-xlargeReports each completion in a list if more than one completion requested or prompts is a list
Ask the specified Jurassic 1 engine to complete the prompt with the options provided. Input 3 must be your AI21 API key which you can obtain from ai21.com For documentation on the options see https://studio.ai21.com/docs/api/
j1-jumboj1-jumbo j1-grande j1-large
What sampling temperature to use. Higher values mean the model will take more risks. A temperature of zero will always take the most likely next token (word), which is good when expecting a well-defined answer. 'temperature' should normally be between 0 and 1, but OpenAI accepts values up to 2 and others up to 5. We generally recommend altering this or top_p but not both. More info at https://beta.openai.com/docs/api-reference/parameter-details https://studio.ai21.com/docs/api/#request https://docs.cohere.ai/generate-reference#request
ada babbage curie davinci j1-jumbo j1-large small medium large xlarge
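Temperature's effect on the token distribution, including the greedy behavior at 0, can be sketched as scaled softmax:

```python
import math

def apply_temperature(logits, temperature):
    """Token probabilities after temperature scaling. Temperature 0 means
    greedy decoding: all probability mass on the most likely token."""
    if temperature == 0:
        best = max(logits, key=logits.get)
        return {t: (1.0 if t == best else 0.0) for t in logits}
    scaled = {t: v / temperature for t, v in logits.items()}
    top = max(scaled.values())  # subtracting the max avoids overflow
    exps = {t: math.exp(v - top) for t, v in scaled.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}
```

Lower temperatures sharpen the distribution (the likely tokens become even more likely); higher temperatures flatten it, making unlikely tokens more probable.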
adaada babbage curie davinci gpt-3.5-turbo gpt-4 j1-jumbo j1-grande j1-large j2-jumbo j2-grande j2-large small medium large xlarge command-xlarge
000See https://en.wikipedia.org/wiki/Cosine_similarity There is a pure Snap! version of this. Note that the 'features closest to list of features' block should be used to compare one list with a list of lists because it is much faster.
Reports the cosine similarity of features assumed to have a magnitude of 1.
"stop":["",""]
This block allows you to set the pen's color, transparency (on a scale from 0=opaque to 100=invisible), or size (in pixels). The color can be set in many ways, including RGB values, HSV values, HSL values, fair HSV or HSL (based on a modified spectrum that gives each spectral color equal space and promotes brown to a color), color number, crayon, or X11/W3C color name. See Appendix A in the Snap! Reference Manual for details.
color color number crayon fair hue ~1 size transparency X11/W3C name ~2 (3D color spaces)={ fair HSL={ fair hue fair saturation (HSL) fair lightness fair HSL vector } fair HSV={ fair hue fair saturation (HSV) fair value (brightness) fair HSV vector } HSL={ hue saturation (HSL) lightness HSL vector } HSV={ hue saturation (HSV) value (brightness) HSV vector } RGB (0-255)={ red green blue RGB(A) vector RGB(A) hex } }§_ext_clr_numbersNote to myself: When the user sets a fair dimension, say fair HSL, variable HSL contains the corresponding "unfair" values, and "raw HSL" contains the numbers the user actually gave. It has to be this way because in the case of HSV, there is no "HSV" variable; rather, we use the pen's actual remembered-by-Javascript color's HSV coordinates. So our HSL variable has to be analogous to that. In both cases "raw" means the numbers the user gave us.
HSV✐ raw HSL100HSV100
pt:lança o erro _
de:fange _ _ ca:agafa _ _ es:atrapar _ _ fr:attrape _ _ pt:captura _ _ cont3
de:wirf _ ca:llança _ es:lanzar _ fr:lance _ pt:lança _ catchtag
de:ignoriere _ ca:ignora _ es:ignorar _ fr:ignore _ pt:ignora _
Sets the pen color to one of 100 preselected colors, like a box of 100 crayons. The colors have names that are meant to be evocative. They are organized in families, more or less corresponding to spectral (rainbow) colors; the input slot has a two-level menu in which you can hover over a family name to see the colors of that family, each including a crayon number and an RGB value. See Appendix A of the Snap! Reference Manual for more details.
grays={ 0 black #000000=0 1 gray7 #121212=1 2 gray14 #242424=2 3 gray21 #363636=3 4 gray28 #484848=4 5 gray36 #5c5c5c=5 6 gray43 #6d6d6d=6 7 gray50 #7f7f7f=7 8 gray57 #919191=8 9 gray64 #a3a3a3=9 10 gray71 #b5b5b5=10 11 gray78 #c8c8c8=11 12 gray85 #dadada=12 13 gray92 #ececec=13 14 white #ffffff=14 } pinks={ 15 deep pink #ff1493=15 16 hot pink #ff69b4=16 17 bright pink #ff007f=17 18 raspberry #e30b5d=18 19 amaranth #e52b50=19 } reds={ 20 red #ff0000=20 21 burgundy #900020=21 22 cherry #990000=22 23 dark candy apple red #a40000=23 24 sanguine #c00000=24 25 maroon #800000=25 26 crimson #c90016=26 27 Lists #d94d11=27 28 candy apple red #ff0800=28 29 coquelicot #ff3800=29 } browns={ 30 saddle brown #8b4513=30 31 chocolate #7b3f00=31 32 kobicha #6b4423=32 33 sepia #704214=33 34 chestnut #954535=34 35 dark brown #654321=35 36 brown #964b00=36 37 golden brown #996515=37 38 cinnamon #b87333=38 39 copper #d2691e=39 } oranges={ 40 orange #ff7f00=40 41 Pantone orange #ff5800=41 42 pumpkin #ff7518=42 43 Variables #f3761d=43 44 Spanish orange #e86100=44 45 burnt orange #cc5500=45 46 sinopia #cb410b=46 47 ochre #cc7722=47 48 carrot #ed9121=48 49 tangerine #f28500=49 } yellows={ 50 yellow #ffff00=50 51 Control #e6a822=51 52 dark goldenrod #b8860b=52 53 goldenrod #daa520=53 54 saffron #f4c430=54 55 sandstorm #ecd540=55 56 mustard #ffdb58=56 57 gold #ffd700=57 58 egg yolk #fee33e=58 59 rubber duck #fbe108=59 } greens={ 60 lime #00ff00=60 61 apple green #8db600=61 62 Operators #62c213=62 63 forest green #228b22=63 64 green #008000=64 65 dark green #006400=65 66 dark pastel green #03c03c=66 67 emerald #50c878=67 68 mint #3eb489=68 69 Pen #00a178=69 } cyans={ 70 aqua (cyan) #00ffff=70 71 dark cyan #008b8b=71 72 cerulean #007ba7=72 73 iceberg #71a6d2=73 74 Sensing #0494dc=74 75 teal #008080=75 76 light sky blue #87cefa=76 77 deep sky blue #00bfff=77 78 dodger blue #1e90ff=78 79 azure #007fff=79 } blues={ 80 blue #0000ff=80 81 midnight blue #191970=81 82 dark powder blue #003399=82 83 
cobalt #0047ab=83 84 denim #1560bd=84 85 navy blue #000080=85 86 steel blue #4682b4=86 87 Motion #4a6cd4=87 88 cornflower #6495ed=88 89 slate blue #6a5acd=89 } purples={ 90 violet #8000ff=90 91 Looks #8f56e3=91 92 grape #6f2da8=92 93 indigo #4b0082=93 94 x11 purple #a020f0=94 95 magenta (fuchia) #ff00ff=95 96 dark orchid #9932cc=96 97 Sound #cf4ad9=97 98 purple #7f007f=98 99 dark magenta #8b008b=99 }✐ last set ascrayon✐ fair?false1231
Takes three inputs for red, green, and blue values, each between 0 and 255. 0,0,0 is black; 255,255,255 is white. 255,255,0 is yellow, and so on. The SET PEN block in this library lets you set individual red, green, or blue without changing the others, lets you provide a list of three RGB color components, and lets you provide a six-digit hexadecimal number, the form in which RGB color values are usually found online. See Appendix A of the Snap! Reference Manual for details.
pt:altera a cor da tua caneta para vermelho _ , verde _ e azul _ (0 a 255) 25500
Takes three inputs for hue, saturation, and value ("brightness") values, each between 0 and 1. 0,0,0 is black; 0,0,1 is white. 0.15,1,1 is yellow, and so on. The SET PEN block in this library lets you set individual hue, saturation, or value without changing the others, lets you provide a list of three HSV color components, and lets you use the very much superior HSL (hue, saturation, lightness) color space. See Appendix A of the Snap! Reference Manual for details.
pt:altera a cor da caneta para matiz _ , saturação _ e brilho _ (0 a 1) 0.30.70.6
Reports the current pen color as a list of three RGB values in the range 0-255. The PEN block in this library provides a large set of ways to examine the color. See Appendix A in the Snap! Reference Manual for details.
pt:a cor da caneta em RGB (vermelho, verde e azul; 0 a 255)
Reports the current pen color as a list of three HSV values in the range 0-1. See Appendix A in the Snap! Reference Manual for details.
pt:a cor da caneta em HSV (matiz, saturação e brilho, 0 a 1)
Don't ask. Only for internal use by the SET PEN block.
truetrue
Creates a bunch of global variables. For internal use by Color library.
global✐ last set as✐ fair?✐ color scale✐ HSL✐ last crayon✐ last X11✐ raw HSL✐ raw HSV✐ X11✐ crayon colors✐ fair hue table✐ colors✐ brights✐ pivots✐ HSV brights✐ HSL brights✐ HSV colors✐ HSL colors✐ X11 catch✐ dimension names✐ dimension dispatch✐ last set asRGB✐ fair?false✐ raw HSV0100100✐ HSL010050✐ raw HSL010050
value100color3.6index1index-1scale1newHSVHSL✐ fair?true
color300index15scale15grayscale1✐ last set ascolor number✐ fair?true✐ raw HSL10050✐ raw HSV100100
015866161216341656✐ last set asRGB✐ fair?false1
44partial matchestrue4matchhead matchestrueSUBSET throws here if it handles a unique match1
✐ last set asHSL33✐ fair?false1
✐ last set asRGB✐ fair?false
✐ last set asRGB✐ fair?false
✐ last set asRGB✐ fair?false
✐ last set asRGB✐ fair?false
0
4value0255131✐ last set asRGB✐ fair?false
✐ last set asHSL22✐ fair?false1
✐ last set asHSL✐ HSL11000100✐ raw HSL✐ fair?false1
✐ last set asHSV✐ fair?false✐ raw HSV1310031
100if USED TO BE fairHSL✐ fair?false1
✐ last set asHSL2222✐ fair?true1
✐ last set asHSL3333✐ fair?true1
2313
22✐ last set asHSV✐ fair?true1
33✐ last set asHSV✐ fair?true1
23131
Reports the part of the first string up to the first instance of the second string inside it. If the second string isn't found, reports the entire first string.
pt:o texto de _ antes de _
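This helper's behavior, the mirror image of the 'text after' reporter, can be sketched as (the function name is illustrative):

```python
def text_before(text, marker):
    """Part of `text` before the first occurrence of `marker`;
    reports all of `text` when `marker` is absent."""
    index = text.find(marker)
    return text if index == -1 else text[:index]
```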
If input is TRUE, comparisons made by functions in the string library will be case-independent (so "FOO" = "foo"). This is the default. If input is FALSE, comparisons will be exact.
pt:altera comparações ignorando distinção minúsculas/maiúsculas para _
This block creates new variables in the selected scope: global (for all sprites), sprite (for this sprite only) or script (only for that block's stack), with the names given in the 'names' list. If there is already a variable with that name in that scope, it does nothing: no errors and no overwrites.
pt:cria as variáveis _ _ ca:crea les _ variables _ es:crear las _ variables _ de:erstellen _ var _ globalglobal sprite script
This block reports "true" if there is a variable with the given name (input slot) in that context. It can be a global, sprite or script variable. Otherwise it reports "false".
pt:a variável _ existe ca:existeix la variable _ ? es:existe la variable _ ? de:existiert var _ ? err_reset
Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5. If included then the 'complete' block will report a list with the completion and the probabilities for each token. Details at https://beta.openai.com/docs/api-reference/parameter-details
5
Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5. If included then the 'complete' block will report a list with the completion and the probabilities for each token. Details at https://beta.openai.com/docs/api-reference/parameter-details
5For Cohere there is the similar feature return_likelihoods For AI21 there is alway topTokens in the response
Reports a list of costumes that match 'description'
1256256 512 1024Ignore the encoding of the costume
Reports a list of costumes that are variations of 'costume'
1256256 512 1024false
Reports a list of costumes that are variations of 'costume'
1256256 512 1024post_to_dallevariationsno maskno prompt
Reports a list of costumes that are edits of 'costume' where transparent parts of 'mask costume' indicate what should be edited and 'prompt' indicates how.
1256256 512 1024false
Reports a list of costumes that are edits of 'costume' where transparent parts of 'mask costume' indicate what should be edited and 'prompt' indicates how.
1256256 512 1024post_to_dalleedits
-2540251100-2520
1
text-generator.io has an OpenAI compatibility mode, so this re-uses the GPT-3 scripts. Ask the specified model to complete the prompt. Input 3 must be your text-generator.io API key, which you can obtain from https://text-generator.io. For documentation on the options see https://beta.openai.com/docs/api-reference/completions/
adaada babbage curie davincichoicesReports each completion in a list if more than one completion requested or prompt is a list
Can communicate with over 15,000 models. The API key can be obtained from huggingface.co. More information at https://ecraft2learn.github.io/ai/AI-Teacher-Guide/chapter-5.html#hugging-face
Check if there is an error
Ask the specified GPT-3 engine to edit the input. Input 3 must be your OpenAI API key, which you can obtain from https://openai.com. For documentation on the options see https://beta.openai.com/docs/api-reference/edits/
What dey of the wek is it?Fix the spelling mistakes
Reports a costume that matches 'description' of 'size'. 'server' is either DALLE-2, in which case the API key needs to be provided, or a Stable Diffusion server, in which case its URL is provided.
DALLE-2DALLE-2 Stable diffusion server512
Reports a costume that matches 'description' of 'size'. 'server' is either DALLE-2, in which case the API key needs to be provided, or a Stable Diffusion server, in which case its URL is provided. The encoding of the costume's image is passed to 'process b64 json' if provided.
DALLE-2DALLE-2 Stable diffusion server5121
Reports a list of costumes that match 'description'
1256256 512 1024message{,x,costume,}response stringPOSThttps://api.openai.com/v1/images/generationsAuthorizationBearer Content-Typeapplication/json0datadatacounter0next number
The number of images to generate. Must be between 1 and 10. Details at https://beta.openai.com/docs/api-reference/images/create
2"n": 10
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024. Generation of smaller images is faster and slightly cheaper. Details at https://beta.openai.com/docs/api-reference/images/create
1024x1024256x256 512x512 1024x1024
Controls whether costumes or URLs to the generated images are reported. Details at https://beta.openai.com/docs/api-reference/images/create
costumecostume url
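The three image-generation options above (the number of images 'n', the 'size', and whether costumes or URLs are reported) end up in the JSON body POSTed to https://api.openai.com/v1/images/generations. A minimal Python sketch of building that body — the helper name is hypothetical; the field names follow the OpenAI images API documentation linked above:

```python
import json

def build_image_request(description, n=1, size="256x256"):
    """Build the JSON body for OpenAI's image-generation endpoint.

    Per https://beta.openai.com/docs/api-reference/images/create,
    'n' must be between 1 and 10 and 'size' one of three values.
    """
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    if size not in ("256x256", "512x512", "1024x1024"):
        raise ValueError("size must be 256x256, 512x512, or 1024x1024")
    return json.dumps({"prompt": description, "n": n, "size": size})
```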
No longer needed in Snap! 7+ but retained to avoid errors from blocks that expect it.
Uses the server running at 'URL' to create a costume that matches the 'description'. The costume will be 'size' by 'size' pixels. 'size' should be a multiple of 256 and at least 512. A large size may result in an out-of-memory error. 'guidance' determines how closely the costume should match the 'description': a high value means the costume should match well, even at reduced quality or variety. If 'seed' is non-zero the generation is repeatable; otherwise a new costume is created each time.
512100
Uses the server running at 'URL' to create a costume that matches the 'description'. The costume will be 'size' by 'size' pixels. 'size' should be a multiple of 256 and at least 512. A large size may result in an out-of-memory error. 'guidance' determines how closely the costume should match the 'description': a high value means the costume should match well, even at reduced quality or variety. If 'seed' is non-zero the generation is repeatable; otherwise a new costume is created each time.
5121005122560string resposePOSTrun/predict{"data":["",,,0199999999]}Content-Typeapplication/json1<No response or HTML response.responsedatadata
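The size rule stated in the help text above (a multiple of 256 and at least 512) is easy to check before calling the server. A hypothetical helper sketching that check:

```python
def valid_stable_diffusion_size(size):
    """Check the size rule from the help text:
    a multiple of 256 and at least 512."""
    return size >= 512 and size % 256 == 0
```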
false
png
OpenAIOpenAI AI21 Cohere Hugging Facetruetrue21
The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens.
16
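Since the prompt's token count plus max_tokens cannot exceed the model's context length, the largest safe max_tokens can be computed from the prompt's (separately measured) token count. A sketch under that assumption — the helper name is hypothetical, and real token counting requires the model's tokenizer:

```python
def max_completion_tokens(prompt_tokens, context_length=2048):
    """Largest max_tokens that keeps prompt + completion within the
    model's context length (2048 for most of the models listed here)."""
    return max(0, context_length - prompt_tokens)
```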
Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5.
5
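The values the API returns are natural-log probabilities, so turning one back into an ordinary probability is a single exponentiation (this is what the scripts' "Convert log of probability to probability" step does; the function name here is illustrative):

```python
import math

def logprob_to_probability(logprob):
    """The API reports natural-log probabilities;
    exponentiate to recover a probability in [0, 1]."""
    return math.exp(logprob)
```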
Reports a list of 1536 numbers that captures some of the meaning of the 'text'. For more details see https://platform.openai.com/docs/guides/embeddings/what-are-embeddings
text-embedding-ada-002data
Reports a list of 1536 numbers that captures some of the meaning of the 'text'. For more details see https://platform.openai.com/docs/guides/embeddings/what-are-embeddings
text-embedding-ada-002
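Embedding vectors like these 1536-number lists are typically compared with cosine similarity: texts with similar meanings get vectors that point in similar directions. A minimal sketch (not part of the blocks themselves):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors
    (e.g. the 1536-number lists the block reports)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```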
Ask the specified GPT-3 engine to complete the prompt. Input 3 must be your OpenAI API key, which you can obtain from https://openai.com. For documentation on the options see https://beta.openai.com/docs/api-reference/completions/
adaada babbage curie davinci gpt-3.5-turbomessage{0 ,Ignore empty options.,davincitext-davinci-003adababbagecurietext--001}response stringPOSThttps://api.openai.com/v1/completionsAuthorizationBearer Content-Typeapplication/json0choicesReports each completion in a list if more than one completion requested or prompts is a list
Completes text using any of several model providers. For options use the ones in this 'Language models' category.
adaada babbage curie davinci j1-jumbo j1-grande j1-large small medium large xlarge command-xlargeReports each completion in a list if more than one completion requested or prompts is a list
Completes text using any of several model providers. For options use the ones in this 'Language models' category.
adaada babbage curie davinci j1-jumbo j1-grande j1-large small medium large xlarge command-xlargeReports each completion in a list if more than one completion requested or prompts is a list
ada babbage curie davinci j1-jumbo j1-largesmall medium large xlarge command-xlarge-20221108
Reports True if the first input string contains the second input string, otherwise false. Comparison is case-independent by default; use USE CASE-INDEPENDENT COMPARISONS to change that.
pt:o texto de _ antes de _ ca:text _ conté _ ?
Reports the character position (starting from 1) of the beginning of the first input string inside the second input string. If it's not found, reports 0.
pt:a posição de _ em _ ca:posició de _ en _
ca:comparacions ignorant majúscules? err_reset
Reports the input text with lower case letters instead of capital letters in the input. Uses the user's locale's rules for case conversion.
pt:em minúsculas _ ca:_ a minúscules
Ask the specified GPT-3 engine to complete the prompt. Input 3 must be your OpenAI API key, which you can obtain from https://openai.com. For documentation on the options see https://beta.openai.com/docs/api-reference/completions/
adaada babbage curie davinci gpt-3.5-turbomessage{0 ,Ignore empty options.,davincitext-davinci-003adababbagecurietext--001}response stringPOSThttps://api.openai.com/v1/completionsAuthorizationBearer Content-Typeapplication/json0choicesReports each completion in a list if more than one completion requested or prompts is a list
useruser assistant system
Makes the prompt safe to use within a JSON message
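Making text safe for a JSON message means escaping quotes, backslashes, newlines, and other special characters. In Python the standard library already does this; a sketch of the idea (the function name is hypothetical):

```python
import json

def json_safe(prompt):
    """Escape quotes, newlines, etc. so the prompt can be embedded
    inside a JSON request body."""
    # json.dumps adds surrounding double quotes; strip them
    # to get just the escaped text.
    return json.dumps(prompt)[1:-1]
```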
Sends messages to the chat version of GPT. Documentation can be found at https://platform.openai.com/docs/api-reference/chat/create
gpt-3.5-turbo gpt-4
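The chat endpoint takes a list of messages, each with a 'role' ('user', 'assistant', or 'system') and 'content', alongside the model name. A hedged sketch of assembling that request body (the helper name is hypothetical; the field names follow the chat API documentation linked above):

```python
import json

def chat_request(messages, model="gpt-3.5-turbo"):
    """Build the JSON body for the chat endpoint. Each message needs
    a 'role' (user, assistant, or system) and 'content'."""
    for m in messages:
        if m["role"] not in ("user", "assistant", "system"):
            raise ValueError("unknown role: " + m["role"])
    return json.dumps({"model": model, "messages": messages})
```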
pt:um par (chave: _ , valor: _ )
pt:a resposta a _ de _ enviando _ e cabeçalhos _ GETGET POST PUT DELETEhttps://snap.berkeley.edu
The second input is an "association list," a list of two-item lists. Each of those smaller lists has a "key" as its first item and a "value" as its second. ASSOC reports the first key-value pair in the association list whose key matches the first input.
ca:associació _ _
probabilities12top_logprobslogprobsConvert log of probability to probability11Report completion and the probabilities for each completion if more than 1. Otherwise just the probabilities of the first one.
pt:o texto multilinha _ ca:multilínia _
Ask the specified Jurassic 1 engine to complete the prompt with the options provided. Input 3 must be your AI21 API key, which you can obtain from ai21.com. For documentation on the options see https://studio.ai21.com/docs/api/
j1-jumboj1-jumbo j1-grande j1-large
Ask the specified Cohere engine to complete the prompt. Input 3 must be your Cohere API key, which you can obtain from https://os.cohere.ai/ For documentation on the options see https://docs.cohere.ai/generate-reference
xlargesmall medium large xlarge command-xlargegenerationsReports each completion in a list if more than one completion requested or prompts is a listCohere stops AFTER not BEFORE so remove stop phrase if there is one.
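Because Cohere stops AFTER rather than BEFORE the stop sequence, the completion can end with the stop phrase itself, and the scripts trim it off. A sketch of that clean-up step (the function name is illustrative):

```python
def remove_trailing_stop(completion, stop):
    """Cohere includes the stop sequence at the end of the completion
    (it stops AFTER, not BEFORE it), so trim it if present."""
    if stop and completion.endswith(stop):
        return completion[:-len(stop)]
    return completion
```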
Reports the portion of the first input (string) starting at the position given by the second input (counting from 1, like LETTER n OF) and ending at the position given by the third input (also counting from 1). If the third input is empty, reports the portion from the first position to the end of the string. If a position number is negative, counts from the end backward, so -1 is the last character, -2 is the next to last, etc.
pt:o texto de _ entre as posições _ e _ , inclusive
[base64-encoded PNG screenshot removed]
Click this to read more about this app.To generate a conversation between multiple personas. 'User' can be included.startUser9999120.7Charles: Hi, I’m Charles Darwin. What would you like to know about me? User: What are some books you have written? Charles: I'm probably most famous for the Origin of Species but I also wrote the Descent of Man. My last book was The Formation of Vegetable Mould through the Action of Worms.2000The following is a conversation between a highly knowledgeable and intelligent AI assistant, called Charles, and a human user, called User. In the following interactions, Charles answers questions pretending to be Charles Darwin. User and Charles will converse in natural language, and Charles will do its best to answer User's questions. 
Charles was built to be respectful, polite and inclusive. The conversation begins. 0,1,2,3,4,5,6,7,8,9,a,b,c,d,e,fGalileo Galileifalse01falsefalsefalsefalse
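As the description at the top says, each request sends a fixed preamble (the persona instructions) plus the recent conversation, ending with the next speaker's name so the model continues in that voice. An illustrative sketch of that assembly — the names and line layout here are assumptions, not the app's exact format:

```python
def build_conversation_prompt(preamble, history, speaker):
    """Assemble a persona-conversation prompt: fixed preamble,
    recent exchanges as 'Name: text' lines, then the next
    speaker's name as a cue for the model to continue."""
    lines = [preamble.strip()]
    lines += [f"{who}: {text}" for who, text in history]
    lines.append(f"{speaker}:")
    return "\n".join(lines)
```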