
12 posts tagged with "Unity"

Technical deep dives into Unity Engine.


Binaural Audio: The Secret to True Fear

· 2 min read
Thang Le
Senior Lead Engineer

Binaural Audio

Hearing the Unseen

In horror, what the player doesn't see is often more terrifying than what they do. Sound is the primary tool for building this "unseen" threat. While standard stereo panning provides some directionality, Binaural Audio (using Head-Related Transfer Functions, or HRTF) creates a true 3D soundscape that tricks the brain into perceiving sounds as coming from specific points in 3D space—including above, below, and behind.

The Science of HRTF

HRTF accounts for how the human ear, head, and torso filter sound based on its arrival angle. Our brains use these subtle changes in frequency and timing to locate a sound source. By applying these filters to digital audio in real-time, we can simulate a sound originating from anywhere around the player's head.

Implementing in Unity

Unity provides several options for spatial audio. For a professional horror title, relying solely on the built-in spatializer is often insufficient.

  1. Oculus Spatializer or Steam Audio: These plugins offer robust HRTF implementations that work across various platforms. They allow for advanced features like acoustic propagation.
  2. Real-time Occlusion: If a monster is behind a door, the sound shouldn't just be quieter; it should be muffled. Using low-pass filters driven by raycasts from the audio source to the listener is essential for immersion.
  3. Reverb Zones: Dynamic reverb that changes based on the volume of the room (e.g., a small tiled bathroom vs. a massive vaulted cathedral) adds a sense of "place" to the audio.
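The occlusion technique in point 2 can be sketched as a small component. This is a minimal illustration, assuming an AudioLowPassFilter sits next to the AudioSource; the field names and cutoff values are placeholders, not a definitive implementation:

```csharp
using UnityEngine;

// Raycast-driven occlusion sketch: if geometry blocks the path from the
// audio source to the listener, pull the low-pass cutoff down to muffle it.
[RequireComponent(typeof(AudioSource), typeof(AudioLowPassFilter))]
public class AudioOcclusion : MonoBehaviour
{
    [SerializeField] private Transform _listener;          // usually the AudioListener
    [SerializeField] private LayerMask _occluderMask;      // walls, doors, etc.
    [SerializeField] private float _openCutoff = 22000f;   // unoccluded: full spectrum
    [SerializeField] private float _occludedCutoff = 800f; // muffled behind geometry
    [SerializeField] private float _smoothing = 8f;

    private AudioLowPassFilter _lowPass;

    private void Awake() => _lowPass = GetComponent<AudioLowPassFilter>();

    private void Update()
    {
        // One ray from the audio source to the listener; any hit means occlusion.
        Vector3 toListener = _listener.position - transform.position;
        bool occluded = Physics.Raycast(transform.position, toListener.normalized,
                                        toListener.magnitude, _occluderMask);

        float target = occluded ? _occludedCutoff : _openCutoff;
        // Smooth the cutoff so opening a door doesn't "pop" the audio.
        _lowPass.cutoffFrequency = Mathf.Lerp(_lowPass.cutoffFrequency, target,
                                              Time.deltaTime * _smoothing);
    }
}
```

Lerping the cutoff rather than snapping it is the key detail: occlusion state flips on a single frame, but the ear expects the muffling to change continuously.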

The Psychological Impact

In a "hide and seek" horror game, binaural audio is a gameplay mechanic. The player must listen for the subtle creak of a floorboard or the wet breath of a stalker to survive. By using Audio Cues that are specifically positioned behind the player, you trigger a primal "fight or flight" response.

A pro tip for horror devs: use Infrasound (very low-frequency sounds, usually below 20Hz). While players might not "hear" it, these frequencies are known to cause feelings of anxiety and unease in humans. Layering these subtle tones into your binaural soundscape can elevate the tension without the player ever knowing why they feel so disturbed.

Custom Scriptable Render Features for Glitch Effects

· 2 min read
Thang Le
Senior Lead Engineer

Glitch Effects

Disrupting Reality

The "glitch" aesthetic has become synonymous with psychological horror and tech-noir. Whether it's the screen tearing of a found-footage camera or the visual degradation of a character losing their mind, these effects need to be more than just a simple overlay. To achieve high-performance, high-fidelity glitching in Unity's Universal Render Pipeline (URP), we must leverage Scriptable Render Features.

Why Not Just Post-Processing?

While Unity's Post-Processing Stack is powerful, it can be restrictive when you need to inject custom logic into specific parts of the rendering pipeline. Scriptable Render Features allow us to:

  • Draw specific objects with custom shaders.
  • Create multiple passes with intermediate textures.
  • Execute logic before or after specific URP events (e.g., After Transparent, Before Post-Processing).

Building the Glitch Pass

A convincing digital glitch usually consists of three core components:

  1. Chromatic Aberration: Splitting the RGB channels. In a glitch, this should be jittery and non-uniform.
  2. Block Displacement: Shifting random rectangular segments of the screen horizontally.
  3. Scanline Interferences: Adding subtle horizontal lines and "static" noise.

In our custom ScriptableRenderPass, we grab the cameraColorTarget, blit it to a temporary render texture using our "Glitch Shader," and then blit it back.

public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    CommandBuffer cmd = CommandBufferPool.Get("GlitchPass");
    // Blit the camera color through the glitch material into a temp RT,
    // then copy the result back. _cameraColorTarget, _tempRT, and
    // _glitchMaterial are set up in the Render Feature before this runs.
    Blit(cmd, _cameraColorTarget, _tempRT, _glitchMaterial);
    Blit(cmd, _tempRT, _cameraColorTarget);
    context.ExecuteCommandBuffer(cmd);
    CommandBufferPool.Release(cmd);
}

Driving the Sanity Meter

The beauty of a custom Render Feature is how easily it can be tied to gameplay systems. By exposing a glitchIntensity parameter in the Render Feature, we can drive it from a "Sanity" or "Corruption" script in C#. As the player's sanity drops, we increase the frequency and amplitude of the block displacement and the severity of the chromatic aberration.

By using a ComputeBuffer or a simple GlobalFloat, we can update these values once per frame on the GPU, ensuring that even the most chaotic visual meltdowns don't impact our CPU performance. This allows for a seamless transition from a clear image to a fractured, terrifying reality.
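The sanity-to-intensity hookup can be sketched as follows. This is an illustrative component, not the shipped system: `_GlitchIntensity` is a hypothetical global shader property, and the sanity field stands in for whatever stats system drives it:

```csharp
using UnityEngine;

// Drives the glitch Render Feature from gameplay state via one global float.
public class SanityGlitchDriver : MonoBehaviour
{
    [Range(0f, 1f)] public float sanity = 1f;

    // Low sanity maps to high glitch intensity.
    [SerializeField] private AnimationCurve _intensityBySanity =
        AnimationCurve.EaseInOut(0f, 1f, 1f, 0f);

    private static readonly int GlitchIntensity =
        Shader.PropertyToID("_GlitchIntensity");

    private void Update()
    {
        // One global upload per frame; every shader sampling _GlitchIntensity
        // sees the new value with no per-material bookkeeping.
        Shader.SetGlobalFloat(GlitchIntensity, _intensityBySanity.Evaluate(sanity));
    }
}
```

Using `Shader.PropertyToID` once avoids hashing the property name string every frame, and the curve lets designers shape the sanity-to-glitch mapping without touching code.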

Compute Shaders for Large-Scale Horror Entities

· 2 min read
Thang Le
Senior Lead Engineer

Compute Shaders Swarm

The Power of the Swarm

There is something primal about the fear of being overwhelmed by numbers. A single monster is a threat, but a swarm of thousands of scuttling insects or shadows is a nightmare. Traditionally, simulating thousands of individual AI entities would crush the CPU. To achieve this in real-time, we must move the simulation to the GPU using Compute Shaders.

GPGPU: Beyond Rendering

Compute Shaders allow us to use the massive parallel processing power of the GPU for non-rendering tasks—in this case, flocking behavior and physics. By storing our entity data (position, rotation, velocity) in a StructuredBuffer, we can update thousands of entities simultaneously in a single dispatch call.
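The CPU side of that setup can be sketched like this. The shader asset, the kernel name `CSMain`, the buffer name `_EntityBuffer`, and the `EntityData` layout are all assumptions for illustration; a real system would also initialize the buffer contents:

```csharp
using UnityEngine;

// Allocates the entity buffer once, then updates the whole swarm with a
// single Dispatch call per frame.
public class SwarmSimulator : MonoBehaviour
{
    private struct EntityData   // must match the struct in the compute shader
    {
        public Vector3 position;
        public Vector3 velocity;
    }

    [SerializeField] private ComputeShader _swarmShader;
    [SerializeField] private int _entityCount = 10000;

    private ComputeBuffer _buffer;
    private int _kernel;

    private void Start()
    {
        _kernel = _swarmShader.FindKernel("CSMain");
        // Stride = 6 floats (position + velocity).
        _buffer = new ComputeBuffer(_entityCount, sizeof(float) * 6);
        _swarmShader.SetBuffer(_kernel, "_EntityBuffer", _buffer);
    }

    private void Update()
    {
        _swarmShader.SetFloat("_DeltaTime", Time.deltaTime);
        // One dispatch updates every entity in parallel (64 threads per group).
        _swarmShader.Dispatch(_kernel, Mathf.CeilToInt(_entityCount / 64f), 1, 1);
    }

    private void OnDestroy() => _buffer?.Release();
}
```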

The Boids Algorithm

For a convincing swarm, we use a modified Boids algorithm. Each entity follows three simple rules:

  1. Separation: Avoid crowding neighbors.
  2. Alignment: Steer towards the average heading of neighbors.
  3. Cohesion: Steer towards the average position of neighbors.

In a horror context, we add a fourth rule: Targeting/Avoidance. The swarm should actively move toward the player or avoid light sources.
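A CPU-side reference of the four rules is useful for prototyping before porting them to the compute shader. The weights below are illustrative tuning values, not the ones we ship:

```csharp
using UnityEngine;

// Reference implementation of the four steering rules for one boid,
// given pre-computed neighborhood averages.
public static class BoidRules
{
    public static Vector3 Steer(Vector3 pos, Vector3 vel,
                                Vector3 avgNeighborPos, Vector3 avgNeighborVel,
                                Vector3 targetPos)
    {
        Vector3 separation = (pos - avgNeighborPos).normalized * 1.5f; // avoid crowding
        Vector3 alignment  = (avgNeighborVel - vel) * 0.5f;            // match heading
        Vector3 cohesion   = (avgNeighborPos - pos) * 0.3f;            // stay together
        Vector3 targeting  = (targetPos - pos).normalized;             // hunt the player
        return separation + alignment + cohesion + targeting;
    }
}
```

The returned vector is a steering force; integrate it into velocity with a clamp on maximum speed so the swarm doesn't accelerate without bound.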

Rendering the Swarm

Once the Compute Shader has updated the positions in the StructuredBuffer, we don't want to send that data back to the CPU (which is slow). Instead, we use GPU Instancing. We provide the buffer directly to a specialized vertex shader that positions the meshes based on the data already sitting in GPU memory.

// In the vertex shader: read per-entity data straight from GPU memory
StructuredBuffer<EntityData> _EntityBuffer;

v2f vert(appdata v, uint instanceID : SV_InstanceID)
{
    float3 pos = _EntityBuffer[instanceID].position;
    // ... offset the vertex by pos before the usual MVP transform ...
}
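Issuing the draw without any CPU readback can be sketched with `Graphics.DrawMeshInstancedProcedural`. The component below is illustrative; the material is assumed to use a shader that reads `_EntityBuffer` as above, and the bounds are a generous placeholder:

```csharp
using UnityEngine;

// Renders the whole swarm in one instanced draw; positions never leave the GPU.
public class SwarmRenderer : MonoBehaviour
{
    [SerializeField] private Mesh _entityMesh;
    [SerializeField] private Material _entityMaterial; // samples _EntityBuffer
    [SerializeField] private int _entityCount = 10000;

    // Instanced draws skip per-object culling, so supply conservative bounds.
    private Bounds _bounds = new Bounds(Vector3.zero, Vector3.one * 500f);

    public void SetBuffer(ComputeBuffer buffer) =>
        _entityMaterial.SetBuffer("_EntityBuffer", buffer);

    private void Update()
    {
        // Instance IDs on the GPU index directly into the entity buffer.
        Graphics.DrawMeshInstancedProcedural(
            _entityMesh, 0, _entityMaterial, _bounds, _entityCount);
    }
}
```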

Performance and Scale

With this architecture, simulating 10,000 entities becomes trivial. The bottleneck shifts from CPU logic to GPU fill rate and vertex processing. To optimize further, we can implement GPU Culling, where the Compute Shader checks if an entity is within the camera frustum before deciding whether it should be rendered.

This technique was used to great effect in titles like A Plague Tale: Innocence for their rat swarms. By leveraging Compute Shaders, indie developers can now achieve a level of scale that was previously reserved for AAA studios, creating truly overwhelming horror experiences.

Transitioning to Unity 6: Wave0084 Strategy

· 2 min read
Hoang Nguyen
Creative Director

Unity 6

Embracing the Future of Indie Horror

At Wave0084, we've always been cautious about engine updates mid-production. However, the release of Unity 6 has brought a suite of features that are simply too impactful to ignore for our upcoming title, Lil Sis. This post outlines our technical rationale and the strategy we're employing to ensure a smooth transition.

Performance: The Core Driver

The primary reason for our jump to Unity 6 is the significant improvement in the Universal Render Pipeline (URP). Specifically, the introduction of GPU Resident Drawer and Spatial Temporal Post-Processing (STP) allows us to push much higher visual fidelity on target hardware like the Steam Deck and mid-range laptops.

For Lil Sis, which relies heavily on dense environmental detail and complex lighting, the GPU Resident Drawer significantly reduces our CPU-side draw call overhead. This allows us to allocate more CPU cycles to our advanced AI systems without sacrificing frame rate.

Graphics and Lighting

Unity 6's enhancements to Adaptive Probe Volumes (APV) are a game-changer for horror. Achieving realistic, moody lighting in dynamic environments has always been a struggle. APV allows for much faster iteration times and better light leakage prevention, which is crucial for maintaining the "darkness" that defines our aesthetic.

We are also leveraging the new Render Graph API. This gives our technical artists granular control over the rendering pipeline, allowing us to implement the "Sanity Glitch" effects (discussed in a previous post) more efficiently and with less boilerplate code.

The Migration Strategy

A transition like this is never without risk. Our strategy involves:

  1. Isolated Branching: The engine upgrade is handled in a dedicated Git branch, separate from the main production line.
  2. Asset Audit: Every shader and custom render feature is being audited for compatibility with the new Render Graph.
  3. Automated Testing: We've expanded our suite of smoke tests to verify that lighting and physics remain consistent across the version jump.

Transitioning to Unity 6 is an investment in the longevity and quality of Lil Sis. It allows us to build on a more stable, performant foundation, ensuring that players have the best possible experience when the game launches.

Optimizing Ray-Traced Shadows for Low-End GPUs

· 2 min read
Thang Le
Senior Lead Engineer

Ray Tracing

The Shadow of the Next Gen

Ray tracing (RT) offers unparalleled realism, especially in horror where shadows are a primary narrative tool. However, the performance cost is often prohibitive for players without high-end RTX hardware. At Wave0084, we've developed a Hybrid Shadow System that brings the benefits of ray-traced shadows to a wider audience.

The Hybrid Approach

The core idea is simple: only use ray tracing where it matters most. For distant objects or subtle secondary shadows, we stick with traditional shadow maps. For hero objects and close-range shadows, we enable RT.

In Unity's High Definition Render Pipeline (HDRP), we use Ray Tracing Quality Levels.

  1. Denoising: The most expensive part of RT is often the denoising pass. By using a more aggressive, lower-resolution denoiser, we can significantly reduce the GPU load while maintaining the "soft" look of ray-traced shadows.
  2. Max Ray Length: By capping the distance a ray can travel, we prevent the GPU from calculating shadows for objects far outside the player's immediate focus.

Resolution Scaling and STP

We also leverage Spatial Temporal Post-Processing (STP). By rendering the ray-traced shadow pass at half the native resolution and then using STP to upsample and sharpen the result, we can achieve nearly identical visual quality at a fraction of the cost.

Dynamic LOD for RT

Not every object needs ray-traced shadows all the time. We implemented a C# system that dynamically toggles the Ray Tracing flag on mesh renderers based on:

  • Distance from Camera: Only objects within 10 meters receive RT shadows.
  • Light Intensity: If a light source is too dim for shadows to be clearly visible, we fall back to shadow maps.
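The distance half of that LOD system can be sketched as a small component. Note that `Renderer.rayTracingMode` lives in an experimental namespace and its exact shape varies between Unity versions, so treat this as a sketch rather than drop-in code:

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;

// Toggles ray-traced shadows per renderer based on distance from the camera.
public class RayTracedShadowLOD : MonoBehaviour
{
    [SerializeField] private Renderer _renderer;
    [SerializeField] private float _rtDistance = 10f; // RT only within 10 meters

    private Transform _camera;

    private void Start() => _camera = Camera.main.transform;

    private void Update()
    {
        // Squared-distance check avoids a square root every frame.
        float sqrDist = (_camera.position - transform.position).sqrMagnitude;
        _renderer.rayTracingMode = sqrDist < _rtDistance * _rtDistance
            ? RayTracingMode.DynamicTransform // hero object: full RT shadows
            : RayTracingMode.Off;             // distant: fall back to shadow maps
    }
}
```

A production version would also hysteresis the threshold (e.g. toggle on at 10 m, off at 12 m) so objects hovering at the boundary don't flicker between the two shadow systems.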

These optimizations allow us to support ray tracing as an "Ultra" setting that is actually playable on mid-range hardware (like an RTX 3060), rather than it being a feature only accessible to a tiny fraction of our player base.

Creating 'Organic' Inventory Systems in C#

· 2 min read
Thang Le
Senior Lead Engineer

Inventory System

Breaking the Fourth Wall

In horror, immersion is everything. The moment a player opens a flat, 2D menu that pauses the game, the tension is broken. They are reminded that they are playing a game. To solve this, many modern horror titles (most famously Dead Space) use Diegetic User Interfaces—UIs that exist within the world of the game.

Architecting a Diegetic Inventory

Building an "organic" inventory system in C# requires a different approach than a standard HUD. Instead of a Canvas-based UI, we use 3D objects and World-Space Canvases attached to the player's character or a handheld device.

public class DiegeticInventory : MonoBehaviour
{
    [SerializeField] private Transform _uiHologramAnchor;
    [SerializeField] private InventoryData _data;

    public void ToggleInventory()
    {
        // Logic to animate the hologram in/out.
        // Ensure the game does NOT pause!
    }
}

The Challenge of Real-Time Interaction

When the game doesn't pause, the inventory becomes a source of tension. The player must manage their items while potentially being hunted. This requires:

  1. Streamlined UX: Actions like "Quick Heal" or "Reload" must be intuitive so the player doesn't fumble while panicked.
  2. Physical Presence: The inventory "screen" should cast light on the player's face and the environment, reinforcing its place in the world.
  3. Animation Integration: The player character should look down at their device or backpack, creating a visual cue for the player's vulnerability.

C# Best Practices: The Data-Driven Approach

To keep the system performant, we use ScriptableObjects for item data and a Messenger/Observer pattern to update the UI. When an item is added to the InventoryData ScriptableObject, it fires an event that the DiegeticInventory listens for, updating the 3D representation without needing to poll every frame.
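The data-driven half can be sketched as follows. `ItemDefinition` is a hypothetical item asset type introduced only for this example; the event wiring is the point:

```csharp
using System;
using UnityEngine;

// Hypothetical item asset, shown only so the example is self-contained.
public class ItemDefinition : ScriptableObject
{
    public string displayName;
}

// ScriptableObject that raises an event on change instead of being polled.
[CreateAssetMenu(menuName = "Inventory/Inventory Data")]
public class InventoryData : ScriptableObject
{
    public event Action<ItemDefinition> OnItemAdded;

    public void Add(ItemDefinition item)
    {
        // ... persist the item in an internal list ...

        // The DiegeticInventory subscribes to this and refreshes its
        // 3D representation only when something actually changed.
        OnItemAdded?.Invoke(item);
    }
}
```

Because the event fires only on mutation, the world-space UI does zero work on the frames (the vast majority) where the inventory is untouched.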

By removing the "safety" of a paused menu and integrating the inventory into the game world, you force the player to stay engaged with the horror even when they are just trying to manage their resources. It's a small change that has a massive impact on the overall feel of the game.

Photogrammetry for Horror Environments

· 2 min read
Thang Le
Senior Lead Engineer

Photogrammetry

The Texture of Decay

Horror lives in the details: the peeling wallpaper of an abandoned asylum, the rusted hinges of a cellar door, the moss-covered stone of a graveyard. Hand-authoring these "messy" textures can be incredibly time-consuming. This is why Photogrammetry—the process of creating 3D meshes and textures from photographs—has become a cornerstone of our environment pipeline at Wave0084.

Capturing the Real World

The process begins with a high-resolution camera and a lot of patience. We look for real-world locations that exhibit the kind of "natural chaos" that is hard to simulate. Using tools like RealityCapture or Metashape, we process hundreds of photos into a high-poly mesh.

For horror, the "imperfections" are what we're after. The way a piece of wood has rotted or the specific pattern of a blood-like stain on concrete. Photogrammetry captures these nuances with a level of fidelity that "clean" procedural textures often lack.

The Game-Ready Pipeline

A raw photogrammetry scan is millions of polygons—far too heavy for real-time rendering. Our pipeline involves:

  1. Retopology: Creating a clean, low-poly version of the scan.
  2. Baking: Transferring the high-poly detail onto the low-poly mesh via Normal and Displacement maps.
  3. Delighting: This is the most critical step. Photos contain baked-in lighting information. We use tools like Unity's ArtEngine or Adobe Sampler to remove this lighting, resulting in "PBR-ready" Albedo maps that react correctly to our game's dynamic lights.

Integrating with URP

In Unity's Universal Render Pipeline, these high-fidelity assets shine when combined with Detail Maps and Decals. By layering a photogrammetric "base" with procedural "grime" decals, we can create environments that feel unique and grounded in reality.

Photogrammetry isn't about replacing artists; it's about giving them a more realistic starting point. It allows us to spend less time on the "grunt work" of modeling bricks and more time on the "art" of lighting and atmosphere.

Advanced Decal Systems for Bloody Footprints

· 2 min read
Thang Le
Senior Lead Engineer

Decals

The Trail of Terror

Bloody footprints are a staple of horror storytelling. They guide the player, build tension, and tell a silent story of a struggle. However, in a game where both the player and the monsters can leave dynamic trails, managing thousands of decals can quickly become a performance nightmare.

The Problem with Standard Decals

Unity's standard Decal Projector is powerful but can be expensive when used in high quantities. Each projector is essentially a specialized camera pass. If you have 500 bloody footprints in a hallway, you're looking at a significant hit to your draw calls and GPU fill rate.

The Solution: GPU-Driven Decal Atlasing

For Lil Sis, we developed a custom decal system that bypasses the standard projectors for small, repetitive details like footprints.

  1. The Atlas: We store all our blood variations in a single high-res texture atlas.
  2. The Mesh Batcher: Instead of a "Projector," we use a single mesh that is generated on the fly. When a footprint is created, we add its vertices to a Dynamic Mesh. This allows us to render hundreds of footprints in a Single Draw Call.
  3. Projector-Less Projection: We use a custom shader that performs the projection logic in the vertex shader. By passing the ground's normal and position data, we can "shrink-wrap" the footprint mesh to the surface.
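The mesh-batcher idea from point 2 can be sketched like this. It is a simplified illustration: atlas UV selection and the circular-buffer vertex reuse are omitted, and a real version would rebuild the mesh less eagerly:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Appends each footprint as a quad to one Mesh, so an entire trail of
// footprints renders in a single draw call.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class FootprintBatcher : MonoBehaviour
{
    [SerializeField] private float _size = 0.15f;

    private Mesh _mesh;
    private readonly List<Vector3> _verts = new List<Vector3>();
    private readonly List<Vector2> _uvs = new List<Vector2>();
    private readonly List<int> _tris = new List<int>();

    private void Awake()
    {
        _mesh = new Mesh();
        GetComponent<MeshFilter>().mesh = _mesh;
    }

    public void AddFootprint(Vector3 pos, Vector3 groundNormal)
    {
        // Quad aligned to the surface, nudged along the normal to
        // avoid z-fighting with the floor.
        Quaternion rot = Quaternion.LookRotation(groundNormal);
        Vector3 p = pos + groundNormal * 0.01f;
        int i = _verts.Count;
        _verts.Add(p + rot * new Vector3(-_size, -_size, 0));
        _verts.Add(p + rot * new Vector3(-_size,  _size, 0));
        _verts.Add(p + rot * new Vector3( _size,  _size, 0));
        _verts.Add(p + rot * new Vector3( _size, -_size, 0));
        _uvs.Add(new Vector2(0, 0)); _uvs.Add(new Vector2(0, 1));
        _uvs.Add(new Vector2(1, 1)); _uvs.Add(new Vector2(1, 0));
        _tris.AddRange(new[] { i, i + 1, i + 2, i, i + 2, i + 3 });

        _mesh.SetVertices(_verts);
        _mesh.SetUVs(0, _uvs);
        _mesh.SetTriangles(_tris, 0);
    }
}
```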

Fade-out and Memory Management

To prevent memory leaks, we use a Circular Buffer for our decals. When the buffer is full, the oldest footprint simply fades out and its vertices are repurposed for the newest one.

// In the decal shader: fade alpha from 1 to 0 as the footprint ages.
// (Reversed smoothstep edges are undefined in HLSL, so invert explicitly.)
float alpha = 1.0 - smoothstep(_MinLifetime, _MaxLifetime, v.age);
col.a *= alpha;

This system allows us to have rooms covered in blood and footprints without the player ever noticing a dip in performance. It's a perfect example of how "technical tricks" can be used to support "narrative atmosphere."

Phasmophobia: The Power of Voice Recognition

· 2 min read
Hoang Nguyen
Creative Director

Phasmophobia Voice

"Give us a sign."

When Phasmophobia exploded in popularity, it wasn't just because of its co-op ghost hunting. It was because it broke the barrier between the player and the game world using Voice Recognition. By allowing players to speak directly to the entities—and having those entities respond—Kinetic Games created a level of intimacy and terror that buttons and menus could never achieve.

The Mechanic of Presence

In most games, the player is a silent observer. In Phasmophobia, your voice is a beacon. The game uses a "speech-to-text" engine (like Windows Speech Recognition or Google Cloud Speech-to-Text) to listen for specific keywords: "Where are you?", "Are you old?", "Show yourself."

This creates a powerful psychological effect. By forcing the player to speak out loud, the game makes them feel more "present" in the haunted space. It's a form of Role-Playing that is enforced by the game's mechanics. When you're whispering in your dark room and the ghost suddenly responds by throwing a plate, the boundary between reality and the game blurs.
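In Unity, a basic version of this keyword listening can be sketched with the built-in `KeywordRecognizer`. Note this API is Windows-only (it wraps Windows Speech Recognition); consoles and mobile need a different backend. The phrases and the response hook are illustrative:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Listens for a fixed set of ghost-hunting phrases and reacts when one is heard.
public class GhostListener : MonoBehaviour
{
    private KeywordRecognizer _recognizer;

    private void Start()
    {
        _recognizer = new KeywordRecognizer(
            new[] { "where are you", "are you old", "show yourself" });
        _recognizer.OnPhraseRecognized += OnPhrase;
        _recognizer.Start();
    }

    private void OnPhrase(PhraseRecognizedEventArgs args)
    {
        Debug.Log($"Player said: {args.text}");
        // ... roll for a ghost response (throw a plate, flicker a light) ...
    }

    private void OnDestroy() => _recognizer?.Dispose();
}
```

Keyword recognition sidesteps the latency and privacy concerns of full speech-to-text: only a small fixed vocabulary is matched, entirely on-device.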

Technical Hurdles: Noise and Privacy

Implementing voice recognition in a horror game comes with significant challenges:

  1. Noise Filtering: The system must distinguish between the player's voice and background noise (or the voices of other players in the room).
  2. Latency: The response from the ghost must be near-instantaneous for the interaction to feel real.
  3. Platform Support: Speech-to-text APIs vary wildly between Windows, consoles, and mobile.

Beyond the Spirit Box

The true genius of Phasmophobia is that the ghost is always listening, even when you aren't using the Spirit Box. If you scream in panic, the ghost is more likely to hunt you. This turns the player's own fear response into a gameplay disadvantage.

For indie devs, Phasmophobia is a reminder that the most immersive hardware we have is the one we've had all along: the player's own voice. By integrating it into the core loop, you create a unique, personal horror experience that feels alive.

Procedural Interior Generation for Infinite Horrors

· 2 min read
Thang Le
Senior Lead Engineer

Procedural Interiors

The Ever-Shifting Labyrinth

Procedural Generation (ProcGen) is often associated with vast open worlds or roguelikes. However, in horror, ProcGen can be a powerful tool for creating a sense of "unreliable reality." If the layout of the haunted house changes every time the player enters, they can never feel truly safe. The challenge is making these procedural layouts feel believable and atmospheric, rather than just a collection of random rooms.

The Constraint-Based Approach

For Lil Sis's "Dreamscape" sequences, we use a Constraint-Based Layout Generator. Instead of purely random placement, we define a set of architectural rules:

  • Bathrooms must be adjacent to bedrooms.
  • Hallways must eventually lead to a "hub" area.
  • Escape routes must always be at least two rooms away from the spawn point.

We use a Wave Function Collapse (WFC) algorithm modified for 3D space. WFC ensures that every piece placed (a door, a window, a corner) is logically compatible with its neighbors, preventing the "floating doors" and "hallways to nowhere" common in simpler generators.
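The adjacency constraints at the heart of WFC can be illustrated with a toy rule table. The room types and rules below are hypothetical, mirroring the list above; real WFC repeatedly prunes each cell's candidate set with exactly this kind of check until one valid option remains:

```csharp
using System.Collections.Generic;

// Toy adjacency rules: a candidate room type may only be placed next to
// neighbors its rule set allows.
public static class LayoutRules
{
    private static readonly Dictionary<string, HashSet<string>> Allowed =
        new Dictionary<string, HashSet<string>>
        {
            { "Bathroom", new HashSet<string> { "Bedroom" } },
            { "Bedroom",  new HashSet<string> { "Bathroom", "Hallway" } },
            { "Hallway",  new HashSet<string> { "Bedroom", "Hub", "Hallway" } },
            { "Hub",      new HashSet<string> { "Hallway" } },
        };

    public static bool CanPlace(string candidate, IEnumerable<string> placedNeighbors)
    {
        // Every already-placed neighbor must be legal for this candidate.
        foreach (var n in placedNeighbors)
            if (!Allowed[candidate].Contains(n))
                return false;
        return true;
    }
}
```

Because every placement passes a check like this, "floating doors" and "hallways to nowhere" are impossible by construction rather than patched up afterward.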

Procedural Lighting and Atmosphere

A room is just a box until it's lit. In a procedural system, we can't hand-place every light. We use a Procedural Light Probe system that analyzes the generated room's volume and places light sources based on its "purpose" (e.g., a flickering fluorescent light in a hallway, a single lamp in a bedroom).

public void GenerateRoomLighting(Room room)
{
    // Logic to identify key 'mood' points
    // and instantiate light prefabs with randomized flickers.
}

The Uncanny Valley of Architecture

The goal of procedural horror interiors is to create something that looks "almost" right. By subtly breaking the rules of architecture—making a hallway slightly too long or a door slightly too small—you trigger the "Uncanny Valley" response in the player. They know something is wrong, even if they can't quite put their finger on it. This architectural gaslighting is a unique strength of procedural systems in the horror genre.