Projects
Artificial Intelligence
(1) LMs
- The Rhythm In Anything: https://oreillyp.github.io/tria/
- DriveDreamer4D: https://drivedreamer4d.github.io/
- TokenFormer: https://haiyang-w.github.io/tokenformer.github.io/
- hertz-dev: https://si.inc/hertz-dev/
- PointLLM: https://runsenxu.com/projects/PointLLM/
- Aria: https://www.rhymes.ai/blog-details/aria-first-open-multimodal-native-moe-model
- SiT: https://scalable-interpolant.github.io/
- Rhymes AI: https://rhymes.ai/
- nemotron-4-340b-instruct: https://build.nvidia.com/nvidia/nemotron-4-340b-instruct
- EgoLM: https://hongfz16.github.io/projects/EgoLM
- Small Language Models (SLM): https://www.jetson-ai-lab.com/tutorial_slm.html
- MiniMind: https://jingyaogong.github.io/minimind/
- ell: https://docs.ell.so/
- CogVLM2-Video: https://cogvlm2-video.github.io/
- InternVL: https://internvl.github.io/
- Chroma: https://generatebiomedicines.com/chroma
- GameNGen: https://gamengen.github.io/
- Husky: https://agent-husky.github.io/
- Sapiens: https://about.meta.com/realitylabs/codecavatars/sapiens
- MeshFormer: https://meshformer3d.github.io/
- Genie: https://sites.google.com/view/genie-2024/home
- Llama Tutor: https://llamatutor.together.ai/
- CogVideo: https://cogvideo.pka.moe/
- Awesome ChatGPT Prompts: https://prompts.chat/
- Opening up ChatGPT: https://opening-up-chatgpt.github.io/
- AI Home Tab: https://aihometab.com/
- Groq: https://groq.com/
- NextChat: https://nextchat.dev/
- MaxKB: https://maxkb.cn/
- OpenDatalab: https://opendatalab.com/OpenSourceTools
- Segment Anything Model 2 (SAM 2): https://ai.meta.com/sam2/
- LLaMA-Factory (Qwen SFT guide): https://qwen.readthedocs.io/en/latest/training/SFT/llama_factory.html
- FunAudioLLM: https://funaudiollm.github.io/
- KLING: https://kling.kuaishou.com/
- AlphaGeometry: https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/
- Prompt Engineering Guide: https://www.promptingguide.ai/
- AskManyAI: https://askmanyai.cn/login
- Meet PathChat 2: https://www.modella.ai/intro.html
- PaintsUndo: https://lllyasviel.github.io/pages/paints_undo/
- Artificial Analysis: https://artificialanalysis.ai/
- Transformer Explainer: https://poloclub.github.io/transformer-explainer/
- Moshi: https://moshi.chat/?queue_id=talktomoshi
- AI Graveyard: https://dang.ai/ai-graveyard
- GraphRAG: https://microsoft.github.io/graphrag/
- Luma Dream Machine: https://lumalabs.ai/dream-machine
- Open-Sora: https://hpcaitech.github.io/Open-Sora/
- Cambrian-1: https://cambrian-mllm.github.io/
- PaLM-E: https://palm-e.github.io/
- Meta Prompting for AI Systems: https://meta-prompting.github.io/
- SITUATIONAL AWARENESS: https://situational-awareness.ai/
- ChatTTS: https://chattts.com/
- KAN: https://kindxiaoming.github.io/pykan/
- LLM Visualization: https://bbycroft.net/llm
- MLX: https://ml-explore.github.io/mlx/build/html/index.html
- COSTAR Prompt Engineering: https://medium.com/@frugalzentennial/unlocking-the-power-of-costar-prompt-engineering-a-guide-and-example-on-converting-goals-into-dc5751ce9875
- Cohere: https://cohere.com/
- DeepSpeed: https://www.deepspeed.ai/
- AI Mind: https://www.aimind.so/
- flowith: https://flowith.io/conv/51c05bc8-92f3-4644-869d-35fd22dbfb71
- LeanDojo: https://leandojo.org/
- GPT4All: https://www.nomic.ai/gpt4all
- MiniGPT4-Video: https://vision-cair.github.io/MiniGPT4-video/
- Mira: https://mira-space.github.io/
- ModelScope: https://www.modelscope.cn/home
- Osmo: https://www.osmo.ai/
- Hume: https://www.hume.ai/
- Reka: https://www.reka.ai/
- Suno: https://suno.com/
- Kimi: https://kimi.moonshot.cn/
- SeamlessM4T: https://ai.meta.com/blog/seamless-m4t/
- ElevenLabs: https://elevenlabs.io/
- ChatLaw: https://chatlaw.cloud/
- Gemma: https://ai.google.dev/gemma?hl=zh-cn
- Stable Diffusion Art: https://stable-diffusion-art.com/comfyui/
- OpenRouter: https://openrouter.ai/
- ChatRTX: https://www.nvidia.com/en-us/ai-on-rtx/chatrtx/
- Runway AI: https://runwayml.com/
- Stable Video: https://www.stablevideo.com/welcome
- Stability AI: https://stability.ai/
- Multi-Agent Transformer: https://sites.google.com/view/multi-agent-transformer
- DreamerV3: https://danijar.com/project/dreamerv3/
- Imagen 2: https://deepmind.google/technologies/imagen-2/
- Gemini Models: https://deepmind.google/technologies/gemini/#introduction
- Anthropic AI: https://www.anthropic.com/
- Grok: https://x.ai/
- LLaVA: https://llava-vl.github.io/
- Speaking AI: https://speaking.ai/
- GPT-4 Is Too Smart To Be Safe: https://llmcipherchat.github.io/
- Character AI: https://character.ai/
- Whisper: https://openai.com/index/whisper/
- Imagica AI: https://www.imagica.ai/
- LangGPT: https://community.openai.com/t/langgpt-empowering-everyone-to-become-a-prompt-expert/207880
- vLLM: https://docs.vllm.ai/en/latest/ (see the inference sketch after this list)
- DeepAI: https://deepai.org/
- Midjourney: https://www.midjourney.com/home
- Beautiful AI: https://www.beautiful.ai/
- Qwen2.5-Turbo: https://qwen2.org/qwen2-5-turbo/
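A minimal sketch of offline batch inference with vLLM, as referenced in the vLLM entry above. It assumes `pip install vllm` and a GPU; the model id is illustrative, and any Hugging Face-format causal LM should work:

```python
# Hedged sketch: offline batch generation with vLLM.
# The model id below is illustrative, not prescribed by this list.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # illustrative model id
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["What is a language model?"], params)
for out in outputs:
    print(out.outputs[0].text)  # first sampled completion per prompt
```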
(2) Agents
- UFO: https://github.com/microsoft/UFO
- ell: https://github.com/MadcowD/ell
- SWE-agent: https://github.com/princeton-nlp/SWE-agent
- autogen: https://github.com/microsoft/autogen
- CharacterGen: https://github.com/zjp-shadow/CharacterGen
- Qwen2.5-Coder: https://qwenlm.github.io/zh/blog/qwen2.5-coder/
- Skyvern: https://www.skyvern.com/
- Ichigo: https://github.com/homebrewltd/ichigo
- AI for Grant Writing: https://www.lizseckel.com/ai-for-grant-writing/
- Large Language Model Agents: https://llmagents-learning.org/f24
- OpenAGI: https://openagi.aiplanet.com/
- HyperWrite: https://www.hyperwriteai.com/
- FastGPT: https://tryfastgpt.ai/
- ADAS: https://www.shengranhu.com/ADAS/
- AI Town: https://www.convex.dev/ai-town
- Comflowy: https://www.comflowy.com/
- Altera: https://altera.al/
- Firecrawl: https://www.firecrawl.dev/
- Artificial LIfe ENvironment (ALIEN): https://www.alien-project.org/index.html
- AutoGen Studio 2.0: https://autogen-studio.com/
- SuperCraft: https://supercraft.ai/
- Pipecat: https://www.pipecat.ai/
- Neo4j: https://neo4j.com/labs/genai-ecosystem/llm-graph-builder/
- Lumina: https://www.lumina.sh/c5bbe32b-4fb7-476a-81aa-fe269f67f283?ref=www.lumina-chat.com
- RAGFlow: https://ragflow.io/
- OmniParse: https://docs.cognitivelab.in/
- Supermemory: https://supermemory.ai/
- MindSearch: https://mindsearch.netlify.app/
- ChatDev: https://chatdev.toscl.com/
- LlamaCoder: https://llamacoder.together.ai/
- GPTs Works: https://gpts.works/
- V2A-Mapper: https://v2a-mapper.github.io/
- MultiOn: https://www.multion.ai/
- ThinkAny: https://thinkany.ai/zh
- Mem0: https://docs.mem0.ai/overview
- Cradle: https://baai-agents.github.io/Cradle/
- Devv: https://devv.ai/zh
- Co-STORM: https://storm.genie.stanford.edu/
- Bubble: https://bubble.io/
- Humanize AI text: https://www.humanizeai.pro/
- Aider: https://aider.chat/
- Agently AI: https://agently.tech/
- DeepSeek Coder: https://deepseekcoder.github.io/
- AgentScope: https://doc.agentscope.io/en/index.html
- CrewAI: https://docs.crewai.com/introduction
- Humaan AI: https://humaan.ai/
- FlowiseAI: https://flowiseai.com/
- Chainlit: https://docs.chainlit.io/get-started/overview
- Phidata: https://docs.phidata.com/agents
- Lepton AI: https://www.lepton.ai/
- AutoGPT: https://agpt.co/
- MetaGPT: https://www.deepwisdom.ai/
- LangGraph: https://langchain-ai.github.io/langgraph/ (see the graph sketch after this list)
- QAnything: https://qanything.ai/
- Synthflow AI Voice Assistants: https://synthflow.ai/
- Tavily: https://tavily.com/
- Dify: https://dify.ai/
- LangChain: https://www.langchain.com/
- LlamaIndex: https://www.llamaindex.ai/
- SWE-agent: https://swe-agent.com/
- MemGPT: https://memgpt.ai/
- SIMA: https://deepmind.google/discover/blog/sima-generalist-ai-agent-for-3d-virtual-environments/
- Durable: https://durable.co/
- Cognition: https://www.cognition.ai/
- LTX Studio: https://ltx.studio/
- vellum: https://www.vellum.ai/
- DetectGPT: https://detectgpt.ai/index.html
- Dora AI: https://www.dora.run/ai
- An Embodied Generalist Agent in 3D World: https://embodied-generalist.github.io/
- Voyager: https://voyager.minedojo.org/
- XAgent: https://xagent-doc.readthedocs.io/en/latest/
- SuperAGI: https://superagi.com/
- ima.copilot: https://ima.qq.com/
- Supermaven: https://supermaven.com/
- Accio: https://www.accio.com/
- excalidraw: https://excalidraw.com/
- Tencent Yuanqi: https://yuanqi.tencent.com/agent-shop
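A minimal LangGraph sketch, as referenced in the LangGraph entry above: a one-node state graph compiled and invoked. The state schema and node body are illustrative placeholders, not part of any listed project:

```python
# Hedged sketch: a single-node LangGraph state machine.
from typing import TypedDict

from langgraph.graph import END, StateGraph

class State(TypedDict):
    question: str
    answer: str

def respond(state: State) -> dict:
    # Illustrative node: a real agent would call an LLM or tool here.
    return {"answer": f"Echo: {state['question']}"}

graph = StateGraph(State)
graph.add_node("respond", respond)
graph.set_entry_point("respond")
graph.add_edge("respond", END)

app = graph.compile()
print(app.invoke({"question": "hello"}))  # full state dict with the answer filled in
```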
(3) AIGC
- MikuDance: https://kebii.github.io/MikuDance/
- DanceFusion: https://th-mlab.github.io/DanceFusion/
- StdGEN: https://stdgen.github.io/
- URAvatar: https://junxuan-li.github.io/urgca-website/
- Towards High-fidelity Head Blending with Chroma Keying for Industrial Applications: https://hahminlew.github.io/changer/
- ReCapture: https://generative-video-camera-controls.github.io/
- DimensionX: https://chenshuo20.github.io/DimensionX/
- Fashion-VDM: https://johannakarras.github.io/Fashion-VDM/
- X-Portrait 2: https://byteaigc.github.io/X-Portrait2/
- GameGen-X: https://gamegen-x.github.io/
- HelloMeme: https://songkey.github.io/hellomeme/
- DreamVideo-2: https://dreamvideo2.github.io/
- VidPanos: https://vidpanos.github.io/
- DAWN: https://hanbo-cheng.github.io/DAWN/
- Interstice: https://www.interstice.cloud/
- LongVU: https://vision-cair.github.io/LongVU/
- DreamCraft3D++: https://dreamcraft3dplus.github.io/
- Hallo2: https://fudan-generative-vision.github.io/hallo2/#/
- MVideo: https://mvideo-v1.github.io/
- UniMuMo: https://hanyangclarence.github.io/unimumo_demo/
- TextToon: https://songluchuan.github.io/TextToon/
- eye-contact-correction: https://www.sievedata.com/functions/sieve/eye-contact-correction
- TANGO: https://pantomatrix.github.io/TANGO/
- Animate-X: https://lucaria-academy.github.io/Animate-X/
- ACE: https://ali-vilab.github.io/ace-page/
- PhysGen: https://stevenlsw.github.io/physgen/
- EdgeRunner: https://research.nvidia.com/labs/dir/edgerunner/
- Movie Gen: https://ai.meta.com/research/movie-gen/
- Inverse Painting: https://inversepainting.github.io/
- Disco4D: https://disco-4d.github.io/
- MimicTalk: https://mimictalk.github.io/
- DIAMOND: https://diamond-wm.github.io/
- JoyHallo: https://jdh-algo.github.io/JoyHallo/
- SF3D: https://stable-fast-3d.github.io/
- GaussianCube: https://gaussiancube.github.io/
- Synchronize Dual Hands for Physics-Based Dexterous Guitar Playing: https://pei-xu.github.io/guitar
- SPARK: https://kelianb.github.io/SPARK/
- LVCD: https://luckyhzt.github.io/lvcd
- PortraitGen: https://ustc3dv.github.io/PortraitGen/
- NeRF: https://www.matthewtancik.com/nerf
- HIT: https://hit.is.tue.mpg.de/#video
- 3DTopia-XL: https://3dtopia.github.io/3DTopia-XL/
- DrawingSpinUp: https://lordliang.github.io/DrawingSpinUp/
- Robust Dual Gaussian Splatting for Immersive Human-centric Volumetric Videos: https://nowheretrix.github.io/DualGS/
- PoseTalk: https://junleen.github.io/projects/posetalk/
- GVHMR: https://zju3dv.github.io/gvhmr/
- PersonaTalk: https://grisoon.github.io/PersonaTalk/
- EscherNet: https://kxhit.github.io/EscherNet
- Gaussian Garments: https://ribosome-rbx.github.io/Gaussian-Garments/
- CyberHost: https://cyberhost.github.io/
- Draw an Audio: https://yannqi.github.io/Draw-an-Audio/
- 3DGRT: https://gaussiantracer.github.io/
- ViewCrafter: https://drexubery.github.io/ViewCrafter/
- ReconX: https://liuff19.github.io/ReconX/
- FaceSwap: https://faceswap.so/
- Civitai: https://civitai.com/
- Loopy: https://loopyavatar.github.io/
- PhotoMaker: https://huggingface.co/spaces/TencentARC/PhotoMaker
- InterTrack: https://virtualhumans.mpi-inf.mpg.de/InterTrack/
- Build-A-Scene: https://abdo-eldesokey.github.io/build-a-scene/
- MagicMan: https://thuhcsi.github.io/MagicMan/
- LayerPano3D: https://ys-imtech.github.io/projects/LayerPano3D/
- DreamCinema: https://liuff19.github.io/DreamCinema/
- TurboEdit: https://betterze.github.io/TurboEdit/
- Scaling Up Dynamic Human-Scene Interaction Modeling: https://jnnan.github.io/trumans/
- DiPIR: https://research.nvidia.com/labs/toronto-ai/DiPIR/
- TurboEdit: https://turboedit-paper.github.io/
- DEGAS: https://initialneil.github.io/DEGAS
- Audio Match Cutting: https://denfed.github.io/audiomatchcut/
- Subsurface Scattering for Gaussian Splatting: https://sss.jdihlmann.com/
- Tavus: https://www.tavus.io/
- Media2Face: https://sites.google.com/view/media2face
- FruitNeRF: https://meyerls.github.io/fruit_nerf/
- Puppet-Master: https://vgg-puppetmaster.github.io/
- ER-NeRF (Zhihu article): https://zhuanlan.zhihu.com/p/675131165
- An Object is Worth 64x64 Pixels: https://omages.github.io/
- VideoDoodles: https://em-yu.github.io/research/videodoodles/
- ReSyncer: https://guanjz20.github.io/projects/ReSyncer/
- KEEP: https://jnjaby.github.io/projects/KEEP/
- MoMask: https://ericguo5513.github.io/momask/
- Tora: https://ali-videoai.github.io/tora_video/
- EmoTalk3D: https://nju-3dv.github.io/projects/EmoTalk3D/
- Cycle3D: https://pku-yuangroup.github.io/Cycle3D/
- Swapface: https://www.swapface.org/#/home
- ExAvatar: https://mks0601.github.io/ExAvatar/
- MotionClone: https://bujiazi.github.io/motionclone.github.io/
- Outfit Anyone: https://humanaigc.github.io/outfit-anyone/
- Vidu: https://www.vidu.studio/zh
- Temporal Residual Jacobians for Rig-free Motion Transfer: https://temporaljacobians.github.io/
- Cinemo: https://maxin-cn.github.io/cinemo_project/
- HumanVid: https://humanvid.github.io/
- Diffree: https://opengvlab.github.io/Diffree/
- SMooDi: https://neu-vi.github.io/SMooDi/
- Lite2Relight: https://vcai.mpi-inf.mpg.de/projects/Lite2Relight/
- Noise Calibration: https://yangqy1110.github.io/NC-SDEdit/
- Diff-Foley: https://diff-foley.github.io/
- Masked Generative Video-to-Audio Transformers with Enhanced Synchronicity: https://maskvat.github.io/
- vozo: https://www.vozo.ai/
- MoA: https://snap-research.github.io/mixture-of-attention/
- Magic Insert: https://magicinsert.github.io/
- CharacterGen: https://charactergen.github.io/
- Live2Diff: https://live2diff.github.io/
- RodinHD: https://rodinhd.github.io/
- StickerBaker: https://stickerbaker.com/
- Still-Moving: https://still-moving.github.io/
- Tripo3D: https://www.tripo3d.ai/
- Hedra: https://www.hedra.com/
- RenderNet: https://rendernet.ai/index.html
- LivePortrait: https://liveportrait.github.io/
- Image Conductor: https://liyaowei-stu.github.io/project/ImageConductor/
- MOTIA: https://be-your-outpainter.github.io/
- Meta 3D AssetGen: https://assetgen.github.io/
- Portrait3D: https://jinkun-hao.github.io/Portrait3D/
- GaussianDreamerPro: https://taoranyi.com/gaussiandreamerpro/
- MimicMotion: https://tencent.github.io/MimicMotion/
- Text-Animator: https://laulampaul.github.io/text-animator.html
- YouDream: https://youdream3d.github.io/
- FoleyCrafter: https://foleycrafter.github.io/
- Wonder Studio: https://wonderdynamics.com/
- TripoSR: https://stability.ai/news/triposr-3d-generation
- MeshAnything: https://buaacyw.github.io/mesh-anything/
- EvTexture: https://dachunkai.github.io/evtexture.github.io/
- ScoreHypo: https://xy02-05.github.io/ScoreHypo/
- AniFusion: https://anifusion.ai/
- Style-NeRF2NeRF: https://haruolabs.github.io/style-n2n/
- 4K4DGen: https://4k4dgen.github.io/
- ExVideo: https://ecnu-cilab.github.io/ExVideoProjectPage/
- Diffutoon: https://ecnu-cilab.github.io/DiffutoonProjectPage/
- Holistic-Motion2D: https://holistic-motion2d.github.io/
- FaceFusion: https://docs.facefusion.io/
- AnyFit: https://colorful-liyu.github.io/anyfit-page/
- PuzzleFusion++: https://puzzlefusion-plusplus.github.io/
- UniAnimate: https://unianimate.github.io/
- ChronoDepth: https://jhaoshao.github.io/ChronoDepth/
- Unique3D: https://wukailu.github.io/Unique3D/
- GECO: https://cwchenwang.github.io/geco/
- T2V-Turbo: https://t2v-turbo.github.io/
- EasyAnimate: https://easyanimate.github.io/
- ZeroSmooth: https://ssyang2020.github.io/zerosmooth.github.io/
- MVSGaussian: https://mvsgaussian.github.io/
- CityGaussian: https://dekuliutesla.github.io/citygs/
- MOFA-Video: https://myniuuu.github.io/MOFA_Video/
- VividDream: https://vivid-dream-4d.github.io/
- Motion2VecSets: https://vveicao.github.io/projects/Motion2VecSets/
- MultiPly: https://eth-ait.github.io/MultiPly/
- Neural Gaffer: https://neural-gaffer.github.io/
- I4VGen: https://xiefan-guo.github.io/i4vgen/
- ToonCrafter: https://doubiiu.github.io/projects/ToonCrafter/
- 2DGS: https://surfsplatting.github.io/
- Collaborative Video Diffusion: https://collaborativevideodiffusion.github.io/
- Looking Backward: https://jeff-liangf.github.io/projects/streamv2v/
- SadTalker: https://sadtalker.github.io/
- VividTalk: https://humanaigc.github.io/vivid-talk/
- I2VEdit: https://i2vedit.github.io/
- MagicPose4D: https://boese0601.github.io/magicpose4d/
- Generative Camera Dolly: https://gcd.cs.columbia.edu/
- ReVideo: https://mc-e.github.io/project/ReVideo/
- Text-to-Vector Generation with Neural Path Representation: https://intchous.github.io/T2V-NPR/
- CAT3D: https://cat3d.github.io/
- StructLDM: https://taohuumd.github.io/projects/StructLDM/
- AniTalker: https://x-lance.github.io/AniTalker/
- Dual3D: https://dual3d.github.io/
- X-Oscar: https://xmu-xiaoma666.github.io/Projects/X-Oscar/
- HiDiffusion: https://hidiffusion.github.io/
- Tunnel Try-on: https://mengtingchen.github.io/tunnel-try-on-page/
- STAG4D: https://nju-3dv.github.io/projects/STAG4D/
- GS-LRM: https://sai-bi.github.io/project/gs-lrm/
- GScream: https://w-ted.github.io/publications/gscream/
- MotionMaster: https://sjtuplayer.github.io/projects/MotionMaster/
- PhysDreamer: https://physdreamer.github.io/
- SwapAnything: https://swap-anything.github.io/
- MagicPose: https://boese0601.github.io/magicdance/
- ZeST: https://ttchengab.github.io/zest/
- EMOPortraits: https://neeek2303.github.io/EMOPortraits/
- StoryDiffusion: https://storydiffusion.github.io/
- Automatic Controllable Colorization via Imagination: https://xy-cong.github.io/imagine-colorization/
- MaPa: https://zhanghe3z.github.io/MaPa/
- EMO: https://humanaigc.github.io/emote-portrait-alive/
- StreamingT2V: https://streamingt2v.github.io/
- IDM-VTON: https://idm-vton.github.io/
- IntrinsicAnything: https://zju3dv.github.io/IntrinsicAnything/
- Interactive3D: https://interactive-3d.github.io/
- in2IN: https://pabloruizponce.github.io/in2IN/
- Synthesia: https://www.synthesia.io/
- DreamWalk: https://mshu1.github.io/dreamwalk.github.io/
- MagicTime: https://pku-yuangroup.github.io/MagicTime/
- Gaussian Head Avatar: https://yuelangx.github.io/gaussianheadavatar/
- Champ: https://fudan-generative-vision.github.io/champ/#/
- ObjectDrop: https://objectdrop.github.io/
- HeyGen: https://www.heygen.com/
- DomoAI: https://domoai.app/
- Pebblely: https://pebblely.com/
- Photorealistic Video Generation with Diffusion Models: https://walt-video-diffusion.github.io/
- 3D-GPT: https://chuny1.github.io/3DGPT/3dgpt.html
- SynthID: https://deepmind.google/technologies/synthid/
- Palette: https://palette.fm/
- Upscayl: https://upscayl.org/
- CLIP-NeRF: https://cassiepython.github.io/clipnerf/
- Scaling up GANs for Text-to-Image Synthesis: https://mingukkang.github.io/GigaGAN/
- CoDeF: https://qiuyu96.github.io/CoDeF/
- edify-3d: https://build.nvidia.com/shutterstock/edify-3d
- JoyVASA: https://jdh-algo.github.io/JoyVASA/
- EchoMimicV1: https://antgroup.github.io/ai/echomimic/
- EchoMimicV2: https://antgroup.github.io/ai/echomimic_v2/
Robotics
(1) Hardware
- Zeroth: https://docs.zeroth.bot/
- Power-over-Skin: https://www.figlab.com/research/2024/poweroverskin
- RoboDuet: https://locomanip-duet.github.io/
- The snake that saves lives: https://ethz.ch/en/news-and-events/eth-news/news/2024/11/the-snake-that-saves-lives.html
- XGO-Rider: https://www.kickstarter.com/projects/xgorobot/xgo-rider-desktop-two-wheel-legged-robot-with-ai
- 7X: https://7xr.tech/
- Berkeley Humanoid: https://berkeley-humanoid.com/
- Torobo: https://robotics.tokyo/products/torobo/
- DexHand: https://www.dexhand.org/
- NAVER LABS: https://www.naverlabs.com/
- Surena Humanoid Robot: https://surenahumanoid.com/
- NEO Home Humanoid: https://www.1x.tech/
- DIGIT - Dexterous Manipulation and Touch Perception: https://digit.ml/
(2) Software
- ROS 2: https://docs.ros.org/en/jazzy/index.html (see the rclpy sketch after this list)
- SAFER-Splat: https://chengine.github.io/safer-splat/
- Neural MP: https://mihdalal.github.io/neuralmotionplanner/
- AirSLAM: https://xukuanhit.github.io/airslam/
- SimTK: https://simtk.org/
- MyoSuite: https://sites.google.com/view/myosuite/myosuite
- Hyfydy: https://hyfydy.com/
- LVCP: https://sites.google.com/view/lvcp
- Hello Algo: https://www.hello-algo.com/
- Skild AI: https://www.skild.ai/
- NVIDIA Project GR00T: https://developer.nvidia.com/project-gr00t
- OpenWorm: https://openworm.org/
- NVIDIA Isaac ROS: https://nvidia-isaac-ros.github.io/
- VehicleSim: https://www.carsim.com/
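A minimal ROS 2 publisher sketch using rclpy, as referenced in the ROS 2 entry above. Node and topic names are illustrative; it assumes a sourced ROS 2 installation (e.g. Jazzy):

```python
# Hedged sketch: minimal ROS 2 publisher node.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Talker(Node):
    def __init__(self):
        super().__init__("talker")  # illustrative node name
        self.pub = self.create_publisher(String, "chatter", 10)
        self.timer = self.create_timer(1.0, self.tick)  # publish at 1 Hz

    def tick(self):
        msg = String()
        msg.data = "hello"
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(Talker())
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```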
AI Robotics
(1) Robot Learning
- UMI (Universal Manipulation Interface): https://umi-gripper.github.io/
- PSAG (One-shot Video Imitation via Parameterized Symbolic Abstraction Graphs): https://www.jianrenw.com/PSAG/
- Identifying Terrain Physical Parameters from Vision: https://leggedrobotics.github.io/identifying_terrain_physical_parameters_webpage/
- RGBManip: https://rgbmanip.github.io/
- RoboStudio: https://robostudioapp.com/
- ReKep: https://rekep-robot.github.io/
- DeformGS: https://deformgs.github.io/
- NeuralFeels: https://suddhu.github.io/neural-feels/
- ALOHA (Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware): https://tonyzhaozh.github.io/aloha/
- LucidSim: https://lucidsim.github.io/
- RoPotter: https://robot-pottery.github.io/
- HOVER: https://hover-versatile-humanoid.github.io/
- DexMimicGen: https://dexmimicgen.github.io/
- Eurekaverse: https://eureka-research.github.io/eurekaverse/
- HIL-SERL: https://hil-serl.github.io/
- OrbitGrasp: https://orbitgrasp.github.io/
- 3D-ViTac: https://binghao-huang.github.io/3D-ViTac/
- Physical Intelligence: https://www.physicalintelligence.company/blog/pi0
- HuDOR: https://object-rewards.github.io/
- LAPA: https://latentactionpretraining.github.io/
- ManipGen: https://mihdalal.github.io/manipgen/
- Robots Pre-Train Robots: https://robots-pretrain-robots.github.io/
- ARNOLD: https://arnold-benchmark.github.io/
- GPT-4V(ision) for Robotics: https://microsoft.github.io/GPT4Vision-Robot-Manipulation-Prompts/
- VoxAct-B: https://voxact-b.github.io/
- ARCap: https://stanford-tml.github.io/ARCap/
- Harmon: https://ut-austin-rpl.github.io/Harmon/
- Data Scaling Laws: https://data-scaling-laws.github.io/
- Dynamic 3D Gaussian Tracking: https://gs-dynamics.github.io/
- OKAMI: https://ut-austin-rpl.github.io/OKAMI/
- UniHSI: https://xizaoqu.github.io/unihsi/
- SDS: https://rpl-cs-ucl.github.io/SDSweb/
- EgoAllo: https://egoallo.github.io/
- DART: https://zkf1997.github.io/DART/
- FürElise: https://for-elise.github.io/
- PourIt: https://hetolin.github.io/PourIt/
- Cherrybot: https://goodcherrybot.github.io/
- AnyCar to Anywhere: https://lecar-lab.github.io/anycar/
- Learning Smooth Humanoid Locomotion through Lipschitz-Constrained Policies: https://lipschitz-constrained-policy.github.io/
- HumanoidOlympics: https://humanoidolympics.github.io/
- OmniH2O: https://omni.human2humanoid.com/
- Continuously Improving Mobile Manipulation with Autonomous Real-World RL: https://continual-mobile-manip.github.io/
- MotIF: https://motif-1k.github.io/
- Helpful DoggyBot: https://helpful-doggybot.github.io/
- Blox-Net: https://bloxnet.org/
- GR-MG: https://gr-mg.github.io/
- Real-World Cooking Robot System from Recipes: https://kanazawanaoaki.github.io/cook-from-recipe-pddl/
- GR-1 (Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation): https://gr1-manipulation.github.io/
- GR-2: https://gr2-manipulation.github.io/
- CLoSD: https://guytevet.github.io/CLoSD-page/
- RDT-1B: https://rdt-robotics.github.io/rdt-robotics/
- Agile Continuous Jumping in Discontinuous Terrains: https://yxyang.github.io/jumping_cod/
- Humanoid Manipulation: https://humanoid-manipulation.github.io/
- Diff-Control: https://diff-control.github.io/
- Catch It: https://mobile-dex-catch.github.io/
- Robot See Robot Do: https://robot-see-robot-do.github.io/
- Gen2Act: https://homangab.github.io/gen2act/
- Full-Order Sampling-Based MPC for Torque-Level Locomotion Control via Diffusion-Style Annealing: https://lecar-lab.github.io/dial-mpc/
- ReMEmbR: https://nvidia-ai-iot.github.io/remembr/
- ReMEmbR (NVIDIA developer blog): https://developer.nvidia.com/blog/using-generative-ai-to-enable-robots-to-reason-and-act-with-remembr/
- MaskedMimic: https://research.nvidia.com/labs/par/maskedmimic/
- Learning Human-to-Humanoid Real-Time Whole-Body Teleoperation: https://human2humanoid.com/
- Theia: https://theia.theaiinstitute.com/
- Robot Motion Diffusion Model: Motion Generation for Robotic Characters: https://la.disneyresearch.com/publication/robot-motion-diffusion-model-motion-generation-for-robotic-characters/
- ALOHA Unleashed: https://aloha-unleashed.github.io/
- PianoMime: https://pianomime.github.io/
- Robot Utility Models: https://robotutilitymodels.com/
- Polaris: https://star-uu-wang.github.io/Polaris/
- ICRT: https://icrt.dev/
- SkillMimic: https://ingrid789.github.io/SkillMimic/
- VoicePilot: https://sites.google.com/andrew.cmu.edu/voicepilot/
- ATM: https://xingyu-lin.github.io/atm/
- ACE: https://ace-teleop.github.io/
- Summarize the Past to Predict the Future: https://eth-ait.github.io/transfusion-proj/
- TacSL: https://iakinola23.github.io/tacsl/
- RoCo: https://project-roco.github.io/
- UniT: https://zhengtongxu.github.io/unifiedtactile.github.io/
- PhysHOI: https://wyhuai.github.io/physhoi-page/
- UMI on Legs: https://umi-on-legs.github.io/
- Lifelike Agility and Play in Quadrupedal Robots: https://tencent-roboticsx.github.io/lifelike-agility-and-play/
- RoboCasa: https://robocasa.ai/
- GET-Zero: https://get-zero-paper.github.io/
- DextrAH-G: https://sites.google.com/view/dextrah-g
- Surgical Robot Transformer: https://surgical-robot-transformer.github.io/
- Grasping Diverse Objects with Simulated Humanoids: https://www.zhengyiluo.com/Omnigrasp-Site/
- PoliFormer: https://poliformer.allen.ai/
- This&That: https://cfeng16.github.io/this-and-that/
- RoboCat: https://deepmind.google/discover/blog/robocat-a-self-improving-robotic-agent/
- RoboGen: https://robogen-ai.github.io/
- EquiBot: https://equi-bot.github.io/
- Policy Composition From and For Heterogeneous Robot Learning: https://liruiw.github.io/policycomp/
- GenSim: https://gen-sim.github.io/
- Bunny-VisionPro: https://dingry.github.io/projects/bunny_visionpro.html
- Open-TeleVision: https://robot-tv.github.io/
- DexGraspNet: https://pku-epic.github.io/DexGraspNet/
- Mobile ALOHA: https://mobile-aloha.github.io/
- OpenVLA: https://openvla.github.io/
- MS-Human-700: https://lnsgroup.cc/research/MS-Human-700
- HumanPlus: https://humanoid-ai.github.io/
- Octo: https://octo-models.github.io/
- HOI-M3: https://juzezhang.github.io/HOIM3_ProjectPage/
- DrEureka: https://eureka-research.github.io/dr-eureka/
- FLD: https://sites.google.com/view/iclr2024-fld/home
- SATO: https://sato-team.github.io/Stable-Text-to-Motion-Framework/
- ViPlanner: https://leggedrobotics.github.io/viplanner.github.io/
- HumanoidBench: https://humanoid-bench.github.io/
- DexCap: https://dex-cap.github.io/
- RT-Sketch: https://rt-sketch.github.io/
- SARA: https://sites.google.com/view/rtsara/
- AutoRT: https://auto-rt.github.io/
- RT-Trajectory: https://rt-trajectory.github.io/
- iGibson: https://svl.stanford.edu/igibson/
- GOAT: https://theophilegervet.github.io/projects/goat/
- Dynamic Handover: https://binghao-huang.github.io/dynamic_handover/
- Eureka: https://eureka-research.github.io/
- Sequential Dexterity: https://sequential-dexterity.github.io/
- From Text to Motion: https://tnoinkwms.github.io/ALTER-LLM/
- MimicGen: https://mimicgen.github.io/
- NOIR: https://noir-corl.github.io/
- BC-Z: https://sites.google.com/view/bc-z/home
- Open-World Object Manipulation using Pre-Trained Vision-Language Models: https://robot-moo.github.io/
- Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models: https://instructionaugmentation.github.io/
- VIMA: https://vimalabs.github.io/
- CLIPort: https://cliport.github.io/
- EmbodiedGPT: https://embodiedgpt.github.io/
- ASE: https://xbpeng.github.io/projects/ASE/index.html
- RoboAgent: https://robopen.github.io/
- RT-2: https://robotics-transformer2.github.io/
- Do As I Can, Not As I Say: https://say-can.github.io/
- Perceiver-Actor: https://peract.github.io/
- VoxPoser: https://voxposer.github.io/
- Soft Robotic Dynamic In-Hand Pen Spinning: https://soft-spin.github.io/
- Bimanual Dexterity for Complex Tasks: https://bidex-teleop.github.io/
(2) Autonomous Driving
(3) Embodied Intelligence
Metaverse
(1) Omniverse
(2) Digital Twin
Utilities
(1) Computer Vision
- OpenCV: https://opencv.org/
- roboflow: https://roboflow.com/
- MoGe: https://wangrc.site/MoGePage/
- Cloth-Splatting: https://kth-rpl.github.io/cloth-splatting/
- SpectroMotion: https://cdfan0627.github.io/spectromotion/
- SMITE: https://segment-me-in-time.github.io/
- VistaDream: https://vistadream-project-page.github.io/index.html
- InterMask: https://gohar-malik.github.io/intermask/
- Dessie: https://celiali.github.io/Dessie/
- CoTracker3: https://cotracker3.github.io/
- Point Cloud Conditioned Mesh Generation (EdgeRunner gallery): https://research.nvidia.com/labs/dir/edgerunner/gallery/point_cond_4.html
- Depth Any Video with Scalable Synthetic Data: https://depthanyvideo.github.io/
- MonST3R: https://monst3r-project.github.io/
- EVER: https://half-potato.gitlab.io/posts/ever/
- CoTracker: https://co-tracker.github.io/
- WiLoR: https://rolpotamias.github.io/WiLoR/
- MIMO: https://menyifang.github.io/projects/MIMO/index.html
- M2Mapping: https://jianhengliu.github.io/Projects/M2Mapping/
- 3D Gaussian Splatting for Real-Time Radiance Field Rendering: https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/
- StableNormal: https://stable-x.github.io/StableNormal/
- EmbodiedSAM: https://xuxw98.github.io/ESAM/
- Large Étendue 3D Holographic Display with Content-adaptive Dynamic Fourier Modulation: https://bchao1.github.io/holo_dfm/
- Dynamic Gaussian Marbles for Novel View Synthesis of Casual Monocular Videos: https://geometry.stanford.edu/projects/dynamic-gaussian-marbles.github.io/
- EgoHDM: https://handiyin.github.io/EgoHDM/
- DepthCrafter: https://depthcrafter.github.io/
- Spann3R: https://hengyiwang.github.io/projects/spanner
- OpenIns3D: https://zheninghuang.github.io/OpenIns3D/
- Bilateral Reference for High-Resolution Dichotomous Image Segmentation: https://www.birefnet.top/
- ObjectCarver: https://objectcarver.github.io/
- Improving 2D Feature Representations by 3D-Aware Fine-Tuning: https://ywyue.github.io/FiT3D/
- Shape of Motion: https://shape-of-motion.github.io/
- DINO-Tracker: https://dino-tracker.github.io/
- DiffIR2VR-Zero: https://jimmycv07.github.io/DiffIR2VR_web/
- Vidu4D: https://vidu4d-dgs.github.io/
- RaDe-GS: https://baowenz.github.io/radegs/
- Semantic Gaussians: https://sharinka0715.github.io/semantic-gaussians/
- FoundationPose: https://nvlabs.github.io/FoundationPose/
- InstantSplat: https://instantsplat.github.io/
- GS-Pose: https://dingdingcai.github.io/gs-pose/
- I’M HOI: https://afterjourney00.github.io/IM-HOI.github.io/
- MeshLRM: https://sarahweiii.github.io/meshlrm/
- LGM: https://me.kiui.moe/lgm/
- Efficient LoFTR: https://zju3dv.github.io/efficientloftr/
- VideoGigaGAN: https://videogigagan.github.io/
- SpatialTracker: https://henry123-boy.github.io/SpaTracker/
- Key2Mesh: https://key2mesh.github.io/
- Ultralytics YOLO11: https://docs.ultralytics.com/ (see the detection sketch after this list)
- Neuralangelo: https://research.nvidia.com/labs/dir/neuralangelo/
- SAMURAI: https://yangchris11.github.io/samurai/
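A minimal detection sketch with Ultralytics YOLO11 and OpenCV, as referenced in the Ultralytics entry above. It assumes `pip install ultralytics opencv-python`; the image path is illustrative and the weights auto-download on first use:

```python
# Hedged sketch: YOLO11 detection on an image loaded with OpenCV.
import cv2
from ultralytics import YOLO

model = YOLO("yolo11n.pt")         # nano detection weights, auto-downloaded
image = cv2.imread("example.jpg")  # illustrative path; BGR numpy array

results = model(image)             # list of Results objects
for r in results:
    print(r.boxes.xyxy)            # detected boxes in xyxy pixel coordinates
    print(r.boxes.cls)             # predicted class indices
```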
Continue reading: Papers