
7 articles tagged with "Tech Art"

Intersection of art and technical implementation.

View all tags

Advanced Vertex Displacement for Flesh & Gore

· About 2 min read
Thang Le
Senior Lead Engineer


The Art of Visceral Realism

In modern horror game development, the "feel" of an environment is often just as important as its visual fidelity. When it comes to organic, fleshy surfaces—think of the pulsating walls in a biological nightmare or the squishy impact of a weapon on a monster—standard static meshes often fall short. This is where Advanced Vertex Displacement comes into play.

Using Unity's Universal Render Pipeline (URP) and Shader Graph, we can move beyond simple texture swaps and into the realm of dynamic, responsive geometry. By manipulating vertex positions in real-time, we create an illusion of depth and movement that feels unsettlingly real.

The Mathematics of Pulsation

The core of a good "flesh" shader is the combination of multiple sine waves operating at different frequencies and amplitudes. We don't want a uniform pulse; that feels mechanical. Instead, we use a Noise-Driven Displacement approach. By sampling a Perlin or Simplex noise texture and using it to offset the vertex normal, we achieve that irregular, organic heaving that characterizes living tissue.

// Conceptual logic for vertex offset
float3 offset = normal * noiseValue * displacementStrength;
positionOS.xyz += offset;
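The layered-wave idea is language-agnostic. Below is a minimal Python stand-in for the displacement math; the specific frequencies, amplitudes, and function names are illustrative choices, not the values from our actual shader:

```python
import math

def displacement(t, noise_value, strength=0.05):
    """Layer sine waves at different frequencies and amplitudes,
    modulated by a per-vertex noise sample, to avoid a uniform,
    mechanical-feeling pulse. (Constants are illustrative.)"""
    pulse = (math.sin(t * 1.0) * 0.6
             + math.sin(t * 2.3 + 1.7) * 0.3
             + math.sin(t * 5.1 + 0.4) * 0.1)
    return pulse * noise_value * strength

def displace_vertex(position, normal, t, noise_value):
    """Offset a vertex along its normal, like the HLSL snippet above."""
    d = displacement(t, noise_value)
    return tuple(p + n * d for p, n in zip(position, normal))
```

Because the noise sample scales the whole pulse, regions where the noise is near zero barely move at all, which is what produces the irregular, organic heave.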

Implementing in URP Shader Graph

  1. Vertex Position Node: Start by getting the object-space position.
  2. Normal Vector Node: Displacement should almost always happen along the vertex normal to maintain the volume's integrity.
  3. Time-Based Noise: Use a Time node multiplied by a speed constant to scroll a 3D noise function.
  4. Tessellation (Optional): For high-end hardware, adding hardware tessellation allows for much finer detail without requiring incredibly dense base meshes.

Performance Considerations

Vertex displacement is computationally cheaper than some might think, especially when handled entirely on the GPU. However, the biggest bottleneck is often Shadow Mapping. Displaced vertices must also be accounted for in the Shadow Caster pass; otherwise, the shadows will remain static while the mesh pulses, breaking the immersion. Ensure your Shader Graph has the "Shadow Caster" pass properly configured to use the same displacement logic.

By layering these techniques with subsurface scattering and a high-quality specular map (to give that "wet" look), you can create environments that don't just look scary—they feel alive.

Compute Shaders for Large-Scale Horror Entities

· About 2 min read
Thang Le
Senior Lead Engineer


The Power of the Swarm

There is something primal about the fear of being overwhelmed by numbers. A single monster is a threat, but a swarm of thousands of scuttling insects or shadows is a nightmare. Traditionally, simulating thousands of individual AI entities would crush the CPU. To achieve this in real-time, we must move the simulation to the GPU using Compute Shaders.

GPGPU: Beyond Rendering

Compute Shaders allow us to use the massive parallel processing power of the GPU for non-rendering tasks—in this case, flocking behavior and physics. By storing our entity data (position, rotation, velocity) in a StructuredBuffer, we can update thousands of entities simultaneously in a single dispatch call.
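Conceptually, a dispatch runs one small kernel per entity over the buffer. The Python sketch below mimics that data layout and update step on the CPU; `EntityData`, `kernel`, and `dispatch` are hypothetical names for illustration, and the list comprehension stands in for thousands of parallel GPU threads:

```python
from dataclasses import dataclass

@dataclass
class EntityData:
    """Mirrors one element of the StructuredBuffer (position, velocity)."""
    position: tuple
    velocity: tuple

def kernel(e: EntityData, dt: float) -> EntityData:
    """Per-entity update; on the GPU this body runs as one thread per entity."""
    pos = tuple(p + v * dt for p, v in zip(e.position, e.velocity))
    return EntityData(pos, e.velocity)

def dispatch(buffer, dt):
    # A single "dispatch" updates every entity in the buffer.
    return [kernel(e, dt) for e in buffer]
```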

The Boids Algorithm

For a convincing swarm, we use a modified Boids algorithm. Each entity follows three simple rules:

  1. Separation: Avoid crowding neighbors.
  2. Alignment: Steer towards the average heading of neighbors.
  3. Cohesion: Steer towards the average position of neighbors.

In a horror context, we add a fourth rule: Targeting/Avoidance. The swarm should actively move toward the player or avoid light sources.
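The four rules above boil down to summing weighted steering vectors. A minimal Python sketch (the helper functions, weights, and signature are illustrative, not our production code):

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(*vs):  return tuple(sum(c) for c in zip(*vs))
def scale(v, s): return tuple(x * s for x in v)

def boid_steering(pos, vel, neighbor_pos, neighbor_vel, target,
                  w_sep=1.0, w_ali=1.0, w_coh=1.0, w_tgt=1.5):
    n = len(neighbor_pos)
    zero = (0.0, 0.0, 0.0)
    # 1. Separation: steer away from the summed offsets to neighbors
    sep = add(*(sub(pos, p) for p in neighbor_pos)) if n else zero
    # 2. Alignment: steer toward the average neighbor heading
    ali = sub(scale(add(*neighbor_vel), 1.0 / n), vel) if n else zero
    # 3. Cohesion: steer toward the average neighbor position
    coh = sub(scale(add(*neighbor_pos), 1.0 / n), pos) if n else zero
    # 4. Targeting: steer toward the player (negate to flee light sources)
    tgt = sub(target, pos)
    return add(scale(sep, w_sep), scale(ali, w_ali),
               scale(coh, w_coh), scale(tgt, w_tgt))
```

In the compute shader, this same body runs once per entity, reading neighbor data from the StructuredBuffer.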

Rendering the Swarm

Once the Compute Shader has updated the positions in the StructuredBuffer, we don't want to send that data back to the CPU (which is slow). Instead, we use GPU Instancing. We provide the buffer directly to a specialized vertex shader that positions the meshes based on the data already sitting in GPU memory.

// In the vertex shader
StructuredBuffer<EntityData> _EntityBuffer;
uint instanceID = UNITY_GET_INSTANCE_ID(v);
float3 pos = _EntityBuffer[instanceID].position;

Performance and Scale

With this architecture, simulating 10,000 entities becomes trivial. The bottleneck shifts from CPU logic to GPU fill rate and vertex processing. To optimize further, we can implement GPU Culling, where the Compute Shader checks if an entity is within the camera frustum before deciding whether it should be rendered.
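The per-entity culling test itself is cheap: check the entity's bounding sphere against the frustum planes. A conceptual Python version (the plane representation as inward-facing `(normal, d)` pairs is an assumption for illustration):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def inside_frustum(center, radius, planes):
    """planes: list of (normal, d) with normals pointing inward.
    An entity is culled when its bounding sphere lies fully
    behind any one plane."""
    return all(dot(n, center) + d >= -radius for n, d in planes)
```

In the compute shader, an entity failing this test simply isn't appended to the visible-instance buffer, so the instanced draw never touches it.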

This technique was used to great effect in titles like A Plague Tale: Innocence for their rat swarms. By leveraging Compute Shaders, indie developers can now achieve a level of scale that was previously reserved for AAA studios, creating truly overwhelming horror experiences.

Motion Capture on an Indie Budget

· About 2 min read
Hoang Nguyen
Creative Director


Bringing Humanity to the Horrific

In the past, high-quality motion capture (mocap) was a luxury reserved for AAA studios with massive optical rigs. For an indie studio like Wave0084, animating complex human movements—especially the jittery, unnatural movements required for horror—by hand is an enormous time sink. Fortunately, the rise of "budget" mocap solutions has leveled the playing field.

The Rokoko Advantage

Our primary tool for Lil Sis has been the Rokoko Smartsuit Pro II. Unlike optical systems that require cameras and specialized spaces, the Rokoko suit uses inertial sensors (IMUs). This allows us to capture animations anywhere—even in our cramped studio office.

The beauty of inertial mocap is the speed of iteration. We can act out a scene, record it, and retarget it to our character model in minutes. This allows our actors to experiment with the "shaky" and "contorted" movements that make our creatures so unsettling.

Augmenting with AI

While IMU suits are great for body movement, they struggle with fine detail like fingers and facial expressions. To fill these gaps, we've integrated AI-based video analysis tools like Radical AI and Move.ai.

By filming a reference video alongside the mocap data, we can use these AI tools to extract finger movements and subtle head tilts that the suit might miss. For facial capture, we use Apple's ARKit via a standard iPhone, which provides surprisingly high-fidelity blendshape data that maps directly onto our characters in Unity.

Post-Processing the Performance

Mocap is rarely "plug and play." The data often requires cleaning to remove foot sliding or jitters. We use Autodesk MotionBuilder and Unity's Animation Rigging package to:

  1. IK Pass: Ensure feet stay firmly planted on the ground.
  2. Layered Animation: Add hand-keyed flourishes on top of the mocap data to emphasize specific "scary" movements.
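The jitter-removal part of the cleanup can be illustrated with the simplest possible low-pass filter. MotionBuilder's filters are far more sophisticated, but a moving average over one animation channel conveys the idea (names and window size are illustrative):

```python
def smooth_channel(samples, window=5):
    """Moving-average low-pass over one mocap channel (e.g. a joint's
    X position per frame) to suppress high-frequency sensor jitter.
    Edges use a truncated window so the clip length is preserved."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out
```

The trade-off is the usual one: a wider window kills more jitter but also softens the sharp, "contorted" accents, which is exactly why we re-add those by hand in the layered-animation pass.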

By combining these budget-friendly tools, we've been able to achieve a level of animation fidelity that previously would have cost hundreds of thousands of dollars. It's a testament to the "indie-fication" of high-end tech.

Optimizing Ray-Traced Shadows for Low-End GPUs

· About 2 min read
Thang Le
Senior Lead Engineer


The Shadow of the Next Gen

Ray tracing (RT) offers unparalleled realism, especially in horror where shadows are a primary narrative tool. However, the performance cost is often prohibitive for players without high-end RTX hardware. At Wave0084, we've developed a Hybrid Shadow System that brings the benefits of ray-traced shadows to a wider audience.

The Hybrid Approach

The core idea is simple: only use ray tracing where it matters most. For distant objects or subtle secondary shadows, we stick with traditional shadow maps. For hero objects and close-range shadows, we enable RT.

In Unity's High Definition Render Pipeline (HDRP), we use Ray Tracing Quality Levels.

  1. Denoising: The most expensive part of RT is often the denoising pass. By using a more aggressive, lower-resolution denoiser, we can significantly reduce the GPU load while maintaining the "soft" look of ray-traced shadows.
  2. Max Ray Length: By capping the distance a ray can travel, we prevent the GPU from calculating shadows for objects far outside the player's immediate focus.

Resolution Scaling and STP

We also leverage Spatial Temporal Post-Processing (STP). By rendering the ray-traced shadow pass at half the native resolution and then using STP to upsample and sharpen the result, we can achieve nearly identical visual quality at a fraction of the cost.

Dynamic LOD for RT

Not every object needs ray-traced shadows all the time. We implemented a C# system that dynamically toggles the Ray Tracing flag on mesh renderers based on:

  • Distance from Camera: Only objects within 10 meters receive RT shadows.
  • Light Intensity: If a light source is too dim for shadows to be clearly visible, we fall back to shadow maps.
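The per-renderer decision reduces to a small predicate combining both criteria. A sketch of that logic (the 0.2 intensity threshold is an illustrative value, not a Unity default):

```python
def use_rt_shadows(distance_m, light_intensity,
                   max_distance=10.0, min_intensity=0.2):
    """Decide per mesh renderer whether to enable ray-traced shadows:
    close enough to the camera AND lit brightly enough for the
    shadow to read. Otherwise fall back to shadow maps."""
    return distance_m <= max_distance and light_intensity >= min_intensity
```

In the actual C# system this runs on a timer rather than every frame, since toggling ray-tracing flags has its own small cost.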

These optimizations allow us to support ray tracing as an "Ultra" setting that is actually playable on mid-range hardware (like an RTX 3060), rather than it being a feature only accessible to a tiny fraction of our player base.

Photogrammetry for Horror Environments

· About 2 min read
Thang Le
Senior Lead Engineer


The Texture of Decay

Horror lives in the details: the peeling wallpaper of an abandoned asylum, the rusted hinges of a cellar door, the moss-covered stone of a graveyard. Hand-authoring these "messy" textures can be incredibly time-consuming. This is why Photogrammetry—the process of creating 3D meshes and textures from photographs—has become a cornerstone of our environment pipeline at Wave0084.

Capturing the Real World

The process begins with a high-resolution camera and a lot of patience. We look for real-world locations that exhibit the kind of "natural chaos" that is hard to simulate. Using tools like RealityCapture or Metashape, we process hundreds of photos into a high-poly mesh.

For horror, the "imperfections" are what we're after. The way a piece of wood has rotted or the specific pattern of a blood-like stain on concrete. Photogrammetry captures these nuances with a level of fidelity that "clean" procedural textures often lack.

The Game-Ready Pipeline

A raw photogrammetry scan is millions of polygons—far too heavy for real-time rendering. Our pipeline involves:

  1. Retopology: Creating a clean, low-poly version of the scan.
  2. Baking: Transferring the high-poly detail onto the low-poly mesh via Normal and Displacement maps.
  3. Delighting: This is the most critical step. Photos contain baked-in lighting information. We use tools like Unity's ArtEngine or Adobe Sampler to remove this lighting, resulting in "PBR-ready" Albedo maps that react correctly to our game's dynamic lights.

Integrating with URP

In Unity's Universal Render Pipeline, these high-fidelity assets shine when combined with Detail Maps and Decals. By layering a photogrammetric "base" with procedural "grime" decals, we can create environments that feel unique and grounded in reality.

Photogrammetry isn't about replacing artists; it's about giving them a more realistic starting point. It allows us to spend less time on the "grunt work" of modeling bricks and more time on the "art" of lighting and atmosphere.

Outlast: The Night Vision Aesthetic Breakdown

· About 2 min read
Thang Le
Senior Lead Engineer


Seeing in the Dark

Outlast is famous for its "Found Footage" aesthetic, specifically the green-tinted night vision of the player's camcorder. This isn't just a stylistic choice; it's a core gameplay mechanic that fundamentally changes how the player perceives the environment. Let's break down the technical layers that make this effect so iconic.

The Layers of the Effect

A convincing night vision effect is more than just a green color filter. In Outlast, it's a combination of several post-processing passes:

  1. Luminance Boost: The scene is rendered with a very high exposure, blowing out the highlights to simulate how real night vision tubes intensify light.
  2. Monochromatic Grading: The entire image is mapped to a specific green-to-black color ramp.
  3. Film Grain and Noise: High-frequency digital noise is added to simulate the sensor noise of a camera operating in low light.
  4. Vignetting and Distortions: Subtle lens distortion and a heavy vignette create a sense of looking through a viewfinder, increasing the player's feeling of claustrophobia.

The "Eyes" of the Enemy

One of the most terrifying aspects of Outlast's night vision is how it handles reflections. Enemies' eyes are given a high-intensity emissive material that only appears bright when viewed through the camcorder. This creates the "glow-in-the-dark" look of a predator's eyes, allowing the player to spot threats in pitch-black areas—at the cost of their limited battery life.

Implementation in URP

To recreate this in Unity, we use a Custom Post-Processing Effect.

  • We add a Full Screen Pass Renderer Feature to the URP renderer.
  • The shader samples the camera texture and applies dot(color, float3(0.3, 0.59, 0.11)) to get the grayscale (luma) value.
  • This value is then used as an index into a Gradient Map (or a 1D Texture Ramp) to apply the green tint.
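Putting the luminance boost and the green ramp together for a single pixel looks roughly like this Python sketch (the gain value and the ramp's tint coefficients are illustrative stand-ins for the gradient texture):

```python
def night_vision(color, gain=4.0):
    """Night-vision grading for one RGB pixel in 0..1:
    boost exposure, collapse to luma, then remap onto a
    black-to-green ramp."""
    r, g, b = color
    # Luma weights matching the dot product in the bullet list above,
    # with the blown-out exposure clamped at 1.0
    luma = min(1.0, (0.3 * r + 0.59 * g + 0.11 * b) * gain)
    # Tiny stand-in for the gradient map: mostly green, slight R/B tint
    return (luma * 0.1, luma, luma * 0.2)
```

The grain, vignette, and lens distortion would then be layered on top of this graded image in further passes.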

Outlast proved that by limiting the player's vision through a technical lens, you can make them feel more connected to the world while simultaneously making them feel more vulnerable. The camcorder isn't just a tool; it's the player's only, fragile lifeline.

Advanced Decal Systems for Bloody Footprints

· About 2 min read
Thang Le
Senior Lead Engineer


The Trail of Terror

Bloody footprints are a staple of horror storytelling. They guide the player, build tension, and tell a silent story of a struggle. However, in a game where both the player and the monsters can leave dynamic trails, managing thousands of decals can quickly become a performance nightmare.

The Problem with Standard Decals

Unity's standard Decal Projector is powerful but can be expensive when used in high quantities. Each projector is essentially a specialized camera pass. If you have 500 bloody footprints in a hallway, you're looking at a significant hit to your draw calls and GPU fill rate.

The Solution: GPU-Driven Decal Atlasing

For Lil Sis, we developed a custom decal system that bypasses the standard projectors for small, repetitive details like footprints.

  1. The Atlas: We store all our blood variations in a single high-res texture atlas.
  2. The Mesh Batcher: Instead of a "Projector," we use a single mesh that is generated on the fly. When a footprint is created, we add its vertices to a Dynamic Mesh. This allows us to render hundreds of footprints in a Single Draw Call.
  3. Projector-Less Projection: We use a custom shader that performs the projection logic in the vertex shader. By passing the ground's normal and position data, we can "shrink-wrap" the footprint mesh to the surface.
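The batching idea can be sketched simply: every footprint appends one quad (four vertices, two triangles) to a shared vertex/index list, so the whole trail is a single mesh and a single draw call. Class and function names here are illustrative, not our actual implementation:

```python
def footprint_quad(center, right, forward, size=0.15):
    """Four corners of a footprint quad lying on the ground plane,
    built from the surface's right/forward tangent vectors."""
    cx, cy, cz = center
    rx, ry, rz = (c * size for c in right)
    fx, fy, fz = (c * size for c in forward)
    return [(cx - rx - fx, cy - ry - fy, cz - rz - fz),
            (cx + rx - fx, cy + ry - fy, cz + rz - fz),
            (cx + rx + fx, cy + ry + fy, cz + rz + fz),
            (cx - rx + fx, cy - ry + fy, cz - rz + fz)]

class DynamicDecalMesh:
    """All footprints share one vertex/index list, i.e. one mesh,
    i.e. one draw call."""
    def __init__(self):
        self.vertices = []
        self.indices = []

    def add_footprint(self, center, right, forward):
        base = len(self.vertices)
        self.vertices += footprint_quad(center, right, forward)
        # Two triangles per quad
        self.indices += [base, base + 1, base + 2,
                         base, base + 2, base + 3]
```

UV selection into the blood atlas would be stored per vertex in the same append step.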

Fade-out and Memory Management

To prevent memory leaks, we use a Circular Buffer for our decals. When the buffer is full, the oldest footprint simply fades out and its vertices are repurposed for the newest one.

// In the decal shader: fade alpha from 1 to 0 as age approaches _MaxLifetime
float alpha = 1.0 - smoothstep(_MinLifetime, _MaxLifetime, v.age);
col.a *= alpha;
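The circular-buffer policy itself is only a few lines; a minimal Python sketch of the slot-reuse behavior (class and method names are hypothetical):

```python
class DecalRingBuffer:
    """Fixed-capacity ring of decal slots: once full, each new
    footprint overwrites the oldest one, so memory never grows."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = 0

    def add(self, decal):
        evicted = self.slots[self.head]   # oldest entry, now repurposed
        self.slots[self.head] = decal
        self.head = (self.head + 1) % len(self.slots)
        return evicted
```

In the real system the "slot" is a range of vertices in the dynamic mesh, and the evicted footprint's fade-out is what the shader alpha above drives.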

This system allows us to have rooms covered in blood and footprints without the player ever noticing a dip in performance. It's a perfect example of how "technical tricks" can be used to support "narrative atmosphere."