Created by Jake Kaldenbaugh
| Question | Answer |
| --- | --- |
| PaLM-E | A generalist robotics model from Google that transfers knowledge from vision and language models to robotics. It augments PaLM with the ViT-22B vision model, enabling it to process raw sensor data. |
| How do LLMs work? | LLMs represent text mathematically in a way that neural networks can process by splitting text into tokens that encode subwords, each associated with a high-dimensional vector. Matrix multiplication is then applied to predict the most likely next word (see the next-token prediction sketch below the table). |
| LaMDA | Language Model for Dialogue Applications: built on Google's Transformer neural network architecture (2017) and trained on dialogue. Incorporates "sensibleness", i.e. what makes sense in conversations. |
| Few-Shot vs Zero-Shot Training | Few-shot: using a small number of examples to obtain domain- and task-specific performance; implies fine-tuning may not be required, since few-shot achieves similar results. Zero-shot: the model performs a task from the instruction alone, with no examples. |
| Bias in GPT-3 | Of 388 occupations tested, 83% were more likely to be associated with a male identifier, particularly occupations with higher education requirements or involving physical labor. Race: Asian consistently received high sentiment, Black low. Religion: |
| What is Google Pathways? | An attempt at a single model that can generalize across domains and tasks while remaining efficient. The Pathways system orchestrates distributed computation for accelerators. Achieves strong few-shot performance across language understanding and generation tasks. |
| PaLM API & MakerSuite | API: access to multi-turn models (content, chat) plus general-purpose models (summarization, classification). MakerSuite: simplifies AI development workflows: tune models, perform synthetic data augmentation, generate embeddings, apply responsibility and safety tooling, deploy and scale. |
| PEPT | Parameter-Efficient Prompt Tuning: instead of fine-tuning all model weights, a small set of learnable prompt embeddings is trained on a small dataset while the base model stays frozen (see the prompt-tuning sketch below the table). |
| Chain of Thought Prompting | Prompting the model to break its reasoning into steps ("show your work"), producing more structured, organized, and accurate responses. Most beneficial for complex math and science problems (see the prompt example below the table). |
| Computer Vision Trends | Around 2020, the field began moving from convolutional neural networks (CNNs) to Transformer architectures. |
| GANs | Generative Adversarial Networks: two opposing models, a generator and a discriminator, are trained against each other, each using the results to improve its ability to "win" (see the training-loop sketch below the table). |
| How do Diffusion Models Work? | Systematically and slowly destroy structure in a data distribution through iterative forward diffusion, then learn a reverse diffusion process that restores the data (see the forward-process sketch below the table). |
| CALM | Confident Adaptive Language Modeling: accelerates LM text generation by improving efficiency at inference time, exploiting the fact that some word predictions are higher probability (easier) than others. CALM dynamically distributes computational effort across generation timesteps. |
| In-Context Learning (Few-Shot Prompting) | The ability of LLMs to perform tasks after seeing only a few examples in the prompt. One hypothesis is that LLMs implicitly fit smaller, simpler linear models over the examples; neural networks may contain internal ML models (see the few-shot prompt example below the table). |
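The "How do LLMs work?" card above compresses tokenization, embeddings, and matrix multiplication into one sentence. Here is a minimal sketch of that pipeline, assuming a toy five-word vocabulary and randomly initialized weights; in a real LLM the weights are learned and the "context" step is a deep Transformer, not an average.

```python
# Minimal sketch of next-token prediction: tokens -> embeddings -> matmul -> probabilities.
# The vocabulary, dimensions, and weights are toy assumptions, not from a real model.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat"]           # toy subword vocabulary
d_model = 8                                          # embedding dimension

embeddings = rng.normal(size=(len(vocab), d_model))  # one high-dimensional vector per token
output_proj = rng.normal(size=(d_model, len(vocab))) # maps hidden state -> vocabulary logits

def predict_next(token_ids):
    """Average the token embeddings (a crude stand-in for the Transformer layers),
    then matrix-multiply into vocabulary logits and normalize into probabilities."""
    hidden = embeddings[token_ids].mean(axis=0)      # crude "context" vector
    logits = hidden @ output_proj                    # the matrix multiplication step
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

probs = predict_next([0, 1, 2])                      # context: "the cat sat"
print(vocab[int(np.argmax(probs))], probs.round(3))  # most likely next token
```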
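For the PEPT card, here is a minimal sketch of the prompt-tuning idea, assuming a toy frozen "language model" (a single Transformer encoder layer stands in for a real pretrained LLM, which would normally be loaded from a checkpoint); only the soft prompt vectors receive gradients.

```python
# Minimal Parameter-Efficient Prompt Tuning sketch: freeze the base model,
# train only a few prepended "soft prompt" embeddings. All sizes are toy assumptions.
import torch
import torch.nn as nn

d_model, vocab_size, n_prompt = 16, 100, 4

# Frozen base model: embeddings + one Transformer layer + LM head.
embed = nn.Embedding(vocab_size, d_model)
body = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
head = nn.Linear(d_model, vocab_size)
for module in (embed, body, head):
    for p in module.parameters():
        p.requires_grad = False                      # base weights stay frozen

# The only trainable parameters: a handful of soft prompt vectors.
soft_prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)
opt = torch.optim.Adam([soft_prompt], lr=1e-2)

tokens = torch.randint(0, vocab_size, (2, 6))        # toy batch of token ids
targets = torch.randint(0, vocab_size, (2,))         # toy labels

for step in range(10):
    x = embed(tokens)                                # (batch, seq, d_model)
    prompt = soft_prompt.expand(x.size(0), -1, -1)   # prepend the learned prompt
    h = body(torch.cat([prompt, x], dim=1))
    logits = head(h[:, -1])                          # predict from the last position
    loss = nn.functional.cross_entropy(logits, targets)
    opt.zero_grad(); loss.backward(); opt.step()

print("final loss:", loss.item())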
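The chain-of-thought card can be made concrete with a prompt. This is an illustrative example in the style popularized for CoT prompting; the questions and wording are made up for illustration.

```python
# Chain-of-thought prompting: the prompt itself demonstrates step-by-step reasoning,
# so the model imitates the format. No API call is shown; the strings are the technique.
cot_prompt = """Q: A cafeteria had 23 apples. It used 20 to make lunch and
bought 6 more. How many apples does it have?

A: Let's think step by step.
The cafeteria started with 23 apples.
It used 20, leaving 23 - 20 = 3.
It bought 6 more, so 3 + 6 = 9.
The answer is 9."""

# A new question appended after the worked example nudges the model to "show work".
new_question = ("Q: I have 5 boxes with 12 pens each and give away 14 pens. "
                "How many pens remain?\nA: Let's think step by step.")
print(cot_prompt + "\n\n" + new_question)
```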
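For the GANs card, here is a minimal training-loop sketch, assuming toy 1-D data drawn from N(4, 1); a real GAN would use images and convolutional networks, but the generator-versus-discriminator structure is the same.

```python
# Minimal GAN sketch: generator G maps noise to samples; discriminator D tries to
# tell real samples from generated ones; each improves against the other.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, 1) + 4.0                  # "real" data: N(4, 1)
    fake = G(torch.randn(32, 8))

    # Discriminator: label real as 1 and fake as 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into labeling fakes as 1.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean:", G(torch.randn(1000, 8)).mean().item())  # should drift toward 4
```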
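For the diffusion card, here is a minimal sketch of the forward (structure-destroying) process on toy 1-D data, assuming a simple linear noise schedule; the reverse denoising process would be learned by a neural network and is not shown.

```python
# Forward diffusion sketch: repeatedly shrink the signal and add Gaussian noise
# until the original structure is destroyed. Schedule and data are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=4.0, scale=0.5, size=1000)        # structured data: N(4, 0.25)

T = 100
betas = np.linspace(1e-4, 0.2, T)                    # noise added per step

for t in range(T):
    # x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * eps
    x = np.sqrt(1.0 - betas[t]) * x + np.sqrt(betas[t]) * rng.normal(size=x.shape)

# After T steps the data is approximately standard Gaussian: structure destroyed.
print("mean:", x.mean().round(3), "std:", x.std().round(3))
```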
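Finally, for the in-context learning card: the "training" happens entirely in the prompt, with no weight updates. The reviews and labels below are made up for illustration.

```python
# Few-shot (in-context) prompting: two labeled examples define the task,
# and the model infers the pattern to complete the third.
few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: positive

Review: It broke after two days and support never replied.
Sentiment: negative

Review: Setup took five minutes and everything just worked.
Sentiment:"""

# An LLM completing this prompt would typically emit "positive",
# having inferred the task format from the two examples above.
print(few_shot_prompt)
```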