What exactly are modeling, texturing, animation, rendering, set design, and special effects?
I know it's used in movies and on social media sometimes to make human-looking objects feel alive, but what do you have to do for each? Like, what is rendering and why is it important? What is the importance of texturing? That's what I want to know: what is the point of each of the things listed, and what do they do, if that makes sense.
Abdul’s Answer
Hi again! I can try and walk you through the process step by step (feel free to reach out with follow-ups if you want).
Let's say you want to create a 3D scene of a basketball player throwing a ball in a school gym.
You start with a couple of images as reference: side/front/top views of the character, in this case a guy (you mainly want the shapes and proportions of the body, face details, etc.), and this applies to pretty much anything you want to model. Like this: https://ucbugg.github.io/learn.ucbugg/images/organic-modeling/orgmodel_002.jpg
Once you import your references and set them up within the software you create a simple shape.
All of these shapes (cube, sphere, cone,...whatever else) are built the same:
- Face(s)
- Edge(s)
- Vertex (singular) / Vertices (plural)
https://d138zd1ktt9iqe.cloudfront.net/media/seo_landing_files/vertex-of-angle-02-1648461511.png
You take your simple cube and subdivide it to create smaller faces and more vertices, then start moving things around to match the reference in the different views (since you're working in 3D space, it needs to make sense from all angles).
https://i.pinimg.com/originals/b4/19/99/b41999f895b0690244bba026a684ce47.jpg
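(If you're curious what that looks like under the hood, here's a rough sketch using Blender's free Python API, bpy; it does the same thing you'd normally do by clicking around in the viewport, and the numbers are just examples.)

```python
# Sketch: modeling basics with Blender's Python API (bpy).
# Run from Blender's Scripting workspace; assumes the default scene.
import bpy

# Start from a primitive cube, the "simple shape" mentioned above.
bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0.0, 0.0, 0.0))
cube = bpy.context.active_object

# The base mesh is nothing but faces, edges, and vertices:
mesh = cube.data
print(len(mesh.vertices), "vertices,", len(mesh.edges), "edges,", len(mesh.polygons), "faces")
# -> 8 vertices, 12 edges, 6 faces for a plain cube

# A Subdivision Surface modifier splits each face into smaller faces,
# giving you more vertices to push around while matching your references.
subsurf = cube.modifiers.new(name="Subdivision", type='SUBSURF')
subsurf.levels = 2          # subdivision level shown in the viewport
subsurf.render_levels = 2   # subdivision level used when rendering
```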
After you have your model and you're happy with it, you create a rigging system. It basically means adding "bones" to your model.
https://d3kjluh73b9h9o.cloudfront.net/original/3X/1/f/1f14c2db46563345eb33eb2be443692820eda1ae.jpeg
You attach parts of your model to each bone and define how much each bone can influence the mesh (making sure arms don't bend backward or the head doesn't twist all the way around). Now you can move and rotate the bones to create the movement, which you can animate however you want.
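(Again, purely as an illustration: a minimal one-bone rigging sketch with Blender's Python API. It assumes your modeled character is the active object; real rigs have many bones, but the idea is the same.)

```python
# Sketch: a one-bone rig with Blender's Python API (bpy).
import bpy

character = bpy.context.active_object   # the mesh you modeled (assumption)

# Add an armature (the "skeleton"); the default armature has a single bone.
bpy.ops.object.armature_add(location=(0.0, 0.0, 0.0))
armature = bpy.context.active_object

# Select the mesh, keep the armature active, then parent with automatic weights:
# Blender estimates how strongly the bone influences each nearby vertex.
character.select_set(True)
armature.select_set(True)
bpy.context.view_layer.objects.active = armature
bpy.ops.object.parent_set(type='ARMATURE_AUTO')

# Limit how far the bone can rotate, e.g. so a joint can't bend backwards.
pose_bone = armature.pose.bones[0]
limit = pose_bone.constraints.new(type='LIMIT_ROTATION')
limit.use_limit_x = True
limit.min_x = 0.0        # radians
limit.max_x = 2.4        # roughly 140 degrees
limit.owner_space = 'LOCAL'
```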
Let's say all of the above is done for the basketball-player scene you're working on. You create the gym the same way, as well as all the other elements in the environment of your scene. How detailed you want to go is totally up to you!
So, you have a fully modeled and animated scene, but everything looks flat grey! This is where Texturing and Rendering come into play.
(for the sake of simplicity we'll just focus on the main character for now)
You have the basketball player, who has skin, is wearing clothes and shoes, and is holding the ball. All of these elements need different materials, where you can change the color, transparency, and reflection, or add an image of fabric and some bump/normal/displacement maps to fake the look of the threads on the clothes or the imperfections of the skin, rather than the default plastic-y grey.
https://media.sketchfab.com/models/748c49515a1b4e198c9ace516355a156/thumbnails/7341298d7c7c43a68eeb10aae02d7d0c/620b833bcaca429c9c6d31a0301dc81d.jpeg
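(Here's a hedged sketch of what assigning a material looks like in Blender's Python API. The image path is a placeholder and the colour/roughness values are just examples; normally you'd tweak these in the shader editor rather than in a script.)

```python
# Sketch: a simple fabric-like material with Blender's Python API (bpy).
import bpy

obj = bpy.context.active_object   # e.g. the jersey mesh (assumption)

# Create a node-based material; the Principled BSDF is Blender's standard surface shader.
mat = bpy.data.materials.new(name="JerseyFabric")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]

# Instead of flat grey plastic: a colour and a rough, cloth-like response.
bsdf.inputs["Base Color"].default_value = (0.8, 0.1, 0.1, 1.0)   # RGBA
bsdf.inputs["Roughness"].default_value = 0.9                     # 0 = mirror-like, 1 = fully matte

# Optionally drive the colour from an image texture (the "texture map").
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("/path/to/fabric_diffuse.png")  # placeholder path
mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])

# Attach the material to the object.
obj.data.materials.append(mat)
```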
After you've textured everything, you can start with the rendering process. You add the lights to the scene. First the indirect lights: sunlight shining through the windows, or an HDRI (a high-resolution image with extra information that you can map onto a sphere or dome that encases your whole scene and creates soft lighting and reflections depending on the image you use). Then you add your direct lights: lights from the ceiling of the gym, or anything else. Maybe you also add some atmosphere/volumetric effects, like sun rays coming through those big gym windows.
https://i.ytimg.com/vi/Wt-gwdxo-x8/maxresdefault.jpg
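(As an illustration only, this is roughly what that lighting setup looks like in Blender's Python API. The HDRI path is a placeholder; free HDRIs are available online, e.g. on polyhaven.com.)

```python
# Sketch: indirect (HDRI) plus direct lights with Blender's Python API (bpy).
import bpy

scene = bpy.context.scene

# Indirect light: wrap the whole scene in an HDRI environment.
world = scene.world
world.use_nodes = True
env = world.node_tree.nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("/path/to/gym.hdr")             # placeholder path
background = world.node_tree.nodes["Background"]
world.node_tree.links.new(env.outputs["Color"], background.inputs["Color"])

# Direct light: a sun for the hard light coming through the windows...
bpy.ops.object.light_add(type='SUN', location=(5.0, -5.0, 10.0))
sun = bpy.context.active_object
sun.data.energy = 3.0

# ...and an area light standing in for the ceiling fixtures.
bpy.ops.object.light_add(type='AREA', location=(0.0, 0.0, 8.0))
ceiling = bpy.context.active_object
ceiling.data.energy = 500.0   # watts; tweak to taste
ceiling.data.size = 4.0
```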
You do a test render with low settings to make sure everything looks good, then you pump up your settings for the final output. You hit render and the magic happens! Your computer will calculate everything you've done within the scene, and pretty much everything can affect the render time: if you have high-resolution textures or very detailed 3D objects, it'll take longer to render.
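(And a minimal sketch of the test-render vs. final-render idea, again with Blender's Python API and the Cycles renderer; the resolutions, sample counts, and output paths are just example values.)

```python
# Sketch: test render vs. final render with Blender's Python API (bpy) and Cycles.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'            # Blender's path-tracing renderer

# Quick test: low resolution, few samples, just to check lighting and materials.
scene.render.resolution_x = 960
scene.render.resolution_y = 540
scene.cycles.samples = 64
scene.render.filepath = "/tmp/test_render.png"   # placeholder output path
bpy.ops.render.render(write_still=True)

# Final output: full resolution and many more samples. This is the slow part;
# heavy textures and dense geometry push the render time up further.
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.cycles.samples = 1024
scene.render.filepath = "/tmp/final_render.png"  # placeholder output path
bpy.ops.render.render(write_still=True)
```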
I tried to be as detailed as possible without being overwhelming!
Abdul
Abdul recommends the following next steps:
Check out this series: https://www.youtube.com/watch?v=At9qW8ivJ4Q
Thank you so much! This was simple enough for me to understand. Also, thanks for the links. This looks amazing and challenging, but it'll be a fun challenge.
Jacob
+1. I also learned a TON from this. Thank you!
Jared Chung, Admin
Siddhant Bal
3D Artist, Instructor, Technical Artist (Shaders), Graphics Programmer, Lighting Artist
Siddhant’s Answer
Hello!
Sure, let's start with understanding a bit about the basic pipeline that is involved in the CG industry for a digital asset creation:
Concept Art (illustrating the idea) -> 3D Modelling (creating the prop) -> 3D Texturing (and Shading) and 3D Animation (the look of the asset and the "acting") -> 3D Lighting -> 3D Compositing
This is a pretty foundational structure followed everywhere, with people being able to go back and tweak anything within limitations (also part of the reason why costs (time and money) increase if the scene is not properly visualized). From here, we can discuss the process laid out for any digital asset (say, a six-faced die):
1. It is all created/modelled in a virtual 3D space of a particular software application (Maya, Blender, Cinema4D, etc.) inside a computer. Let's term it as "A".
2. The 3D space in "A" means the X, Y and Z coordinate axes, meaning if we were to create a cube in this space, that cube will at the very least have 8 vertices, 12 edges connecting them and 6 faces, with each vertex having coordinates (x, y, z).
3. These vertices, edges and faces are processed by the computer every time any change is made to the cube, meaning the more edges are drawn (like diagonals running across faces) or, especially, the more vertices are added to the cube, the longer it takes to process (although this small asset will take a minuscule amount of time).
4. The cube is then cut open, like how we make a box from 6 square cardboard pieces (joined with tape) in real life, but in reverse. This is done in another space called UV space, where the model is viewed in 2D, with the U and V coordinates standing in for the X and Y axes. Once this cut-out, or "UV unwrapping" as we call it, is done and the tiles (6 tiles/faces) are placed, we can proceed to the next step: texturing.
5. As it is a simple die, we can proceed with texturing by either taking it and painting it in 3D painting tools "B" (Substance Painter, Mari, Marmoset Toolbag, etc.) OR we can take the UV map and simply paint in 2D illustration tools "C" (like Adobe Photoshop, Krita, MS Paint, etc.). Either process generates a 2D image (jpeg, png, etc. file formats) that we can use as a "texture map" in "A" again.
6. On the side (after UV unwrapping and the prop is finalized), we animate our die by moving the 3D object, along with placing a 3D virtual camera facing the asset at an angle that gives proper visibility of the prop. The animation is defined by the timeline, which is based on our frame rate (24 frames per second (fps) is typical for the TV shows, movies and cartoons we watch; every time you pause, you're looking at a single frame) and the amount of time we allow for that movement (say, 2-5 seconds, meaning 48-120 frames). We slide across this timeline/timeslider and key (lock) each of the poses we set. The best thing about 3D animation is that we don't have to key every single frame, just the crucial points; the software interpolates the rest, moving the vertices through 3D space between keys along a curve. (A minimal scripted version of this keyframing is sketched below, after the list.)
7. Our asset is now ready, but our software still doesn't know if the particular prop we created should behave like a plastic/metal/any material. By default, it should look like a matte finished die, unless we've done artificial shading in software "B" or "C".
8. We check how it looks by adding a 3D light to the scene. Remember, the lights are still virtual and are not a completely accurate depiction of real-life lighting. As such, artists modify the values or add multiple lights to match or improve on the lighting.
9. To make it behave like plastic, we have things called "materials" to which our texture map was connected. The default material varies from software to software, so we'll consider a case where the material has no "shininess" or "reflectivity" to it.
10. Here in the material, we tweak the "shininess" and "reflectivity" values with provided slider values like "glossiness"/"specular"/"roughness".
11. We add a 3D table, a 3D cloth table mat and a 3D plate, with each having its own materials and textures, all visible from our camera's view. This in tandem defines our scene's "set design".
12. During steps 8 to 11, we go back and forth with the process of "rendering", wherein we render from the virtual camera's view of this virtual space. This generates the final image.
If we go ahead and save this rendered image, we now have a digital image.
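As a hedged illustration of step 6 (and using Blender as software "A"), here is a minimal keyframing sketch in Blender's Python API, bpy. The frame numbers and movements are just examples, and it assumes the die is the active object.

```python
# Sketch: keyframing a die over 2 seconds at 24 fps with Blender's Python API (bpy).
import bpy

scene = bpy.context.scene
die = bpy.context.active_object   # the die mesh (assumption)

# 24 fps and a 2-second move: frames 1 through 48.
scene.render.fps = 24
scene.frame_start = 1
scene.frame_end = 48

# Key only the crucial poses; the software interpolates everything in between.
scene.frame_set(1)
die.location = (0.0, 0.0, 0.0)
die.rotation_euler = (0.0, 0.0, 0.0)
die.keyframe_insert(data_path="location")
die.keyframe_insert(data_path="rotation_euler")

scene.frame_set(48)
die.location = (2.0, 0.0, 0.0)             # slide across the table
die.rotation_euler = (0.0, 0.0, 3.14159)   # half a turn around Z
die.keyframe_insert(data_path="location")
die.keyframe_insert(data_path="rotation_euler")
```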
Points to note:
1. Software "B" helps smoothen the process of material creation, as it has its own default lighting. Upon finishing the texturing here, we can directly export the material into "A". This is now much more adopted by the industry as the process involving "C" is now just used for tweaking the texture maps ("C" has other uses other than creating/tweaking texture maps).
2. Texturing and "Shading"/Materials are an important aspect of the CG pipeline. They define that the object is plastic/cloth/skin/glass. They go hand-in-hand with lighting as well. It's my favorite aspect.
3. The rendered images can also have "transparency"/"opacity", allowing us to merge them on top of another image; this is the basis of "compositing" in VFX movies, where the real-life scene's lighting helps the artist tweak the asset so that it sits believably in the shot. (A small sketch of this idea follows.)
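As a small illustration of that last point, here is a hedged sketch: render with a transparent background in Blender (bpy), then layer the result over a background photo with the Pillow imaging library. The file paths are placeholders, Pillow stands in for a real compositing tool, and both images are assumed to be the same resolution.

```python
# Sketch: transparent render + a simple "A over B" composite.
# Part 1 runs inside Blender; part 2 can run anywhere Pillow is installed.
import bpy
from PIL import Image   # pip install Pillow (assumption: not bundled with Blender)

# Part 1: render with a transparent background so empty pixels get alpha = 0.
scene = bpy.context.scene
scene.render.film_transparent = True
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGBA'
scene.render.filepath = "/tmp/die_rgba.png"      # placeholder path
bpy.ops.render.render(write_still=True)

# Part 2: put the rendered die on top of a background plate.
# Both images must be the same resolution for alpha_composite.
background = Image.open("/path/to/table_photo.png").convert("RGBA")  # placeholder path
foreground = Image.open("/tmp/die_rgba.png").convert("RGBA")
composite = Image.alpha_composite(background, foreground)
composite.save("/tmp/composited.png")
```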
Hope this answer helps as well!
Annaleigh’s Answer
Hi Jacob!
These are great questions! I will answer how I can, but I also recommend YouTube!
Modeling is the process of creating 3D representations of objects, characters, or environments using software like Blender or Maya. Texturing involves applying surface details, colors, and materials to these models, enhancing their realism. Animation brings these models to life by creating movement through keyframes and motion paths.
Rendering is the final step where the 3D scene is processed to produce a 2D image or animation, calculating light interactions and generating the final output. Set design focuses on creating the physical or digital environments where scenes occur, establishing mood and context. Special effects (SFX) encompass techniques that create visual illusions, including practical effects like explosions and digital effects like CGI.
Together, these elements work to craft immersive experiences in films, video games, and animations. I wish you luck!