Sumin Cho
3DGS Self-Reconstruction
M.S. Candidate · Computer Vision & Graphics


I am a second-year Master's student at the KAIST Visual Media Lab, advised by Prof. Junyong Noh.

I work at the intersection of computer vision and computer graphics, focusing on controllable generative AI for 3D content. My research explores diffusion- and flow-based models, leveraging 2D diffusion priors or combining 3D flow-model latents with neural representations for stylization, editing, and reconstruction.

Controllable Generative AI · Diffusion & Flow Models · 3D Stylization & Editing · 3D Reconstruction · 3D Deformation · Video & 4D Editing
00 — Quick Snapshot

About

Education
M.S. Computer Vision & Graphics · GPA 4.1/4.3
KAIST · Visual Media Lab (Prof. Junyong Noh)
Sep 2024 – Aug 2026 (expected)
B.S. Art & Technology · Software · Media (Triple Major)
Sogang University
Mar 2019 – Aug 2024
Study Abroad · University of Georgia
Industry & Research
Software Engineer
Abear (Windly) — Internship to Freelance
Jul 2024–Present
Student Researcher
Media AI Lab, Sogang Univ. (Prof. Moon Jung)
Mar 2023–Jul 2024
Founder & Product Developer
Handyy — Foot-tech Startup
Mar 2021–Dec 2022
Service & Leadership
Reviewer — ACM CHI 2026 Posters
ACM SIGCHI
Graduate Student Council President
KAIST Culture Technology
Exhibition Staff
CES 2024 · SIGGRAPH Asia 2025
Honors
Certificate of Excellence — TA
KAIST Education 4.0Q · Sep 2025
Excellence Award
Samsung Securities Digital IT · Dec 2023
Technical Skills
Python · C++ · CUDA · PyTorch · Diffusers · Nerfstudio · 3DGS · Blender · Maya
01 — Publications

Research & Publications

[1]
In Submission
Stylized 3D Editing from a Single Image via Attention Manipulation
A 3D stylization framework that enforces strict multi-view consistency through cross-view attention manipulation in diffusion models, enabling identity-preserving style transfer from a single reference image to 3DGS representations.
[2]
In Submission
AnyWear: Text-Driven Garment Generation via Mask Estimation
Text-guided garment template generation for arbitrary 3D characters via a DGCNN-based highlighting model, with diffusion-based deformation producing character-fitted garment meshes.
[3]
Submitted
Sensory Kitchen: AI-Based Cooking Education System for Visually Impaired
Multimodal cooking assistant combining generative AI and tactile interaction for accessibility-focused cooking education.
[4]
Published · arXiv 2024
ConTEXTure: Consistent Multi-view Texture Generation for 3D Meshes
Simultaneous multi-view texture generation pipeline addressing viewpoint bias in diffusion-based 3D texturing.
02 — Projects

Selected Projects

01
IllusionCraft: Diffusion-Optimized Textures for Transforming Surfaces
3D visual illusions via multi-view optimization — shared-face origami and view-dependent cube illusions.
Diffusion · 3D Texturing · Multi-view
02
3D Virtual Indoor Scene Generation via LLM and RAG
Automated 3D scene generation with LLMs and RAG. Exhibited at SIGGRAPH 2025 (KOCCA).
LLM · RAG · 3D Generation
03
Stylized Infinite-Zoom 3D Scene Generation
Full 3D infinite-zoom pipeline combining Gaussian Splatting and 3D style transfer.
3DGS · Style Transfer · Scene Blending
04
Few-Step Diffusion (EDM + Consistency Distillation)
An EDM teacher distilled into a consistency-model student for fast sampling at NFE = 1/2/4, achieving competitive FID.
Diffusion · EDM · Distillation
05
3D Virtual Interior Design Service
Deep learning platform for realistic virtual spaces from video using NeRF and depth estimation.
NeRF · 3D Reconstruction · Depth
06
Textile Generation for PBR-Ready Fabric Synthesis
Generative model for diverse fabric patterns in PBR workflows. Collaboration with SwatchOn Inc.
Generative · PBR · Texture Synthesis
03 — Paper Studies

Research Readings

50+ papers shared with my research group · 18+ months · 4 core areas

🧊
3D Representation
NeRF · 3DGS
Neural Fields
Coming soon
🌀
Diffusion Models
Score Matching
DDPM · Flow
Coming soon
Generative Models
GAN · VAE
Normalizing Flows
Coming soon
👤
Human / Avatar
FLAME · Body Pose
Avatars · Face
Coming soon