zeroshotlearning.ai


#Zero Shot Learning AI Meta


#Zero Shot Learning (ZSL) | Allowing systems to identify and categorize new items without needing any prior examples


#Zero shot object detection | System can recognize objects based on their descriptive features instead of depending on labeled data


#Visual Intelligence Platform


#Generalized Zero Shot Learning (GZSL) | Recognizing both seen and unseen classes at test time, identifying unseen classes only by their descriptions | Helping AI systems swiftly process new data in real-world circumstances, making them more scalable


#Visual Data Management


#SLAM | Simultaneous Localization and Mapping


#Robotic Perception | Acquiring knowledge from sensor data


#Small Object Detection


#Convnet Based Object Detection


#Deep Learning For Object Detection


#Moving Object Trajectory Prediction


#Ship Detection


#Ship Classification


#Neural Network For Object Detection


#Fast Object Detection


#Road Crack Detection


#Object Detection For Avoidance


#Generic Object Detection


#Surface Object Detection


#Moving Object Detection


#Situations where getting labeled data is hard


#Studying rare diseases


#Studying newly discovered species


#Language processing


#Letting machines adapt quickly without tons of extra training


#Computational biology


#Machine learning models requiring zero examples (shots) of the classes they need to recognize


#Semantic Embedding Space | Representing both seen and unseen classes in common space | Word vectors | Semantic descriptions


#Training model on seen classes, incorporating relationships with unseen classes


#Inferring by relating an unseen class to the semantic embedding space and predicting its class from its relationship to seen classes
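The embedding-space pipeline above (shared semantic space, training on seen classes, nearest-neighbor inference for unseen ones) can be sketched in a few lines. The class names and attribute vectors here are illustrative assumptions, not real trained embeddings:

```python
import math

# Hypothetical semantic embeddings (e.g. attribute or word vectors); values are illustrative.
class_embeddings = {
    "horse": [1.0, 0.0, 0.9, 0.1],   # seen during training
    "tiger": [0.0, 1.0, 0.8, 0.9],   # seen during training
    "zebra": [1.0, 1.0, 0.9, 0.5],   # unseen: horse-like body, tiger-like stripes
}
seen, unseen = ["horse", "tiger"], ["zebra"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def predict_zero_shot(image_embedding, candidates):
    """Assign the candidate class whose semantic embedding lies closest to the
    image's projection into the shared space -- no labeled examples needed."""
    return max(candidates, key=lambda c: cosine(image_embedding, class_embeddings[c]))

# A projected image feature that was never labeled "zebra" during training:
print(predict_zero_shot([0.9, 0.95, 0.85, 0.4], seen + unseen))  # -> zebra
```

Because the zebra's description (stripes plus horse-like shape) places it between the two seen classes in the semantic space, the model can pick it without ever having seen a zebra image.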


#Recognizing unseen


#Understanding specific visual patterns


#Predicting classes without ever having seen images during training


#Allowing models to make predictions without needing examples of every class


#Ambiguity in Unseen Classes | Handling unseen classes that may be similar to multiple seen classes


#Creating machines that learn more like humans


#Developing more flexible and resource-efficient models


#Synthetic data generation (SDG)


#Model training


#Spatial-temporal graph convolutional network (ST-GCN)


#Programming new actions for robots


#Position and orientation randomization for characters, agents, and objects


#Generating synthetic data on human characters and robots


#Creating action recognition model


#Simulating and validating robots, for multiple domains


#Creating diverse scenes


#Recognizing range of actions across different scenarios


#Action Recognition Model


#Field Foundation Model (FFM) | Physical world model using sensor data as an input | Field AI robots can understand how to move in the world, rather than just where to move | Very heavy probabilistic modeling | World modeling becomes a by-product of Field AI robots operating in the world rather than a prerequisite for that operation | Aim is to just deploy the robot, with no training time needed | Autonomous robotic systems applications | Field AI is a software company making sensor payloads that integrate with its autonomy software | Autonomous humanoids are something Field AI can do | Focus on platforms that are more affordable | Integrating mobility with high-level planning, decision making, and mission execution | Potential to take advantage of relatively inexpensive robots is what will make the biggest difference to Field AI's commercial success


#Large Language Model (LLM) | Foundational LLM: e.g. Wikipedia in all its languages fed to the LLM one word at a time | LLM is trained to predict the next word most likely to appear in that context | LLM intelligence is based on its ability to predict what comes next in a sentence | LLMs are amazing artifacts, containing a model of all of language on a scale no human could conceive or visualize | LLMs do not assign any value to information, or to the truthfulness of the sentences and paragraphs they have learned to produce | LLMs are powerful pattern-matching machines but lack human-like understanding, common sense, or ethical reasoning | LLMs produce merely a statistically probable sequence of words based on their training | LLMs are very good at summarizing | Inappropriate use of LLMs as search engines has produced lots of unhappy results | LLM output follows the path of most likely words and assembles them into sentences | Pathological liars as a source of information | Incredibly good at turning pre-existing information into words | Give them facts and let them explain or impart them
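The next-word objective described above can be shown at toy scale with a bigram counter. The corpus is made up, and a real LLM uses a transformer over vastly more data, but the principle (pick the statistically most likely continuation) is the same:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; an actual foundational LLM would ingest e.g. all of Wikipedia.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count, for each word, how often each following word appears.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word given the current word -- the core
    'predict what comes next' step, with no notion of truth attached."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> cat ("cat" follows "the" twice; "mat" and "fish" once each)
```

Note the model assigns probability, not truthfulness: it would happily complete a false sentence if the word statistics pointed that way.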


#Retrieval Augmented Generation (RAG) LLM | Designed for answering queries in a specific subject, for example how to operate a particular appliance, tool, or type of machinery | System takes as much textual information about the subject as possible, such as user manuals, and pre-processes it into small chunks containing a few specific facts | When a user asks a question, the software system identifies the chunk of text most likely to contain the answer | Question and retrieved chunk are then fed to the LLM, which generates a human-language answer in response to the query | Enforcing factualness on LLMs
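A minimal sketch of the retrieval step above, assuming simple word-overlap scoring in place of the embedding search a production RAG system would use; the chunks, query, and prompt format are all illustrative:

```python
# Pre-processed manual text, already split into small chunks of a few facts each.
chunks = [
    "To descale the machine, run the rinse cycle with descaling solution.",
    "The warranty covers manufacturing defects for two years from purchase.",
    "Press and hold the power button for three seconds to switch off.",
]

def tokens(text):
    """Lowercase and strip trailing punctuation so words compare cleanly."""
    return {w.strip(".,?!") for w in text.lower().split()}

def retrieve(query, chunks):
    """Pick the chunk sharing the most words with the query -- a stand-in
    for the similarity search a real system would run over embeddings."""
    q = tokens(query)
    return max(chunks, key=lambda c: len(q & tokens(c)))

query = "How do I switch off the machine?"
context = retrieve(query, chunks)
# The question plus the retrieved chunk would then be sent to the LLM:
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(context)  # -> the power-button chunk
```

Grounding the LLM in a retrieved chunk is what "enforcing factualness" means here: the model explains facts it was handed rather than inventing them.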