ConverScape

Turning conversation tone into living visuals.

See how it felt, not just how long it was.

Abstract

Research has uncovered the importance of hearing in how we understand each other’s emotions. Properties of sound such as frequency, pitch, volume, duration, and vocal bursts all help us understand how other people feel.

Narrative:

ConverScape is a UI platform that attempts to visualize the emotions and relationship between two people. We hope that by translating something as complex as conversation and emotion into a visual, there will be fewer miscommunications, people can fondly reflect on past interactions with family and friends, and potentially difficult conversations can become less daunting. Hence, you have a visual takeaway of your online conversation, one that adds to itself as your relationship continues, giving you a record of your talking time together.

We used Grasshopper to create a system that takes in the human voice and curates a visual representation based on the volume and the pitch of the voice.

Design rule: runs passively during a call; review after; you own the data toggle.

Matrix

  1. Input is a live mic feed from a sound sensor in Grasshopper.
  2. We only keep derived signals: loudness, pitch, and noticeable bursts.
  3. Simple mapping: volume lifts the mesh, pitch tightens or loosens it, bursts nudge it.
  4. Kangaroo keeps the mesh stable with basic damping so it doesn’t explode.
  5. Artifacts must read clearly on dark backgrounds and work at contact-card size.
  6. Privacy is user-controlled; raw audio isn’t stored.
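The mapping rules above (item 3) can be sketched in Python, Grasshopper's scripting language. The function names, scale factors, and thresholds here are illustrative assumptions, not the actual component wiring in the script:

```python
# Sketch of the Matrix mapping: volume lifts, pitch tightens, bursts nudge.
# All constants are placeholder values, not the project's tuned parameters.

def clamp01(x):
    """Keep a normalized feature inside the 0-1 range."""
    return max(0.0, min(1.0, x))

def map_features(loudness, pitch, burst, z_base=0.0):
    """Map normalized (0-1) audio features to mesh parameters."""
    lift = z_base + 2.0 * clamp01(loudness)   # volume lifts the mesh
    stiffness = 0.2 + 0.8 * clamp01(pitch)    # pitch tightens or loosens springs
    impulse = 0.5 if burst else 0.0           # a burst gives a small nudge
    return lift, stiffness, impulse
```

A quiet, low-pitched moment yields a low, loose mesh; a loud, high-pitched one lifts and tightens it, which is the "what you see matches what you heard" goal.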

Features

  1. Real-time, audio-reactive mesh driven by Kangaroo2, with color and line weight changing in response.
  2. After each call, a clean snapshot plus basic stats: average pitch, average loudness, burst count, call length.
  3. A growing gallery per contact so relationships build over time.
  4. Simple sensitivity slider to suit different voices.
  5. One-tap export of stills or short clips.
  6. Works in a dark-mode iOS concept UI with quick drill-downs.
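The post-call summary in item 2 reduces to simple aggregates over the derived signals. A minimal sketch, assuming features are sampled at a fixed period (the field names and sample rate are placeholders, not the app's actual schema):

```python
# Hedged sketch of the per-call stats: average pitch, average loudness,
# burst count, and call length derived from sampled features.

def call_stats(pitch_samples, loudness_samples, bursts, sample_period_s=0.1):
    """Summarize one call from its derived-signal samples; raw audio is never kept."""
    n = len(pitch_samples)
    return {
        "avg_pitch": sum(pitch_samples) / n if n else 0.0,
        "avg_loudness": sum(loudness_samples) / n if n else 0.0,
        "burst_count": len(bursts),
        "call_length_s": n * sample_period_s,
    }
```

Because only these aggregates and the snapshot are stored, the privacy rule from the Matrix (no raw audio retained) holds by construction.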

We started with simple line traces and iterated toward a restrained mesh with physics for clarity and stability. Each pass tightened the mapping so "what you see" cleanly matches "what you heard."
This flow reflects the incremental logic added over multiple iterations.

Iterative process:

Grasshopper Script

How the script is built:

  1. Capture: Time Period + Start/Stop sound-capture nodes stream mic data.
  2. Smoothing: Average Samples / Moving Avg reduces spikes; short windows keep it responsive.
  3. Feature extraction: loudness (peak) and pitch (peak).
  4. Normalize: map features to 0–1 using Bounds → Remap (yellow panels show ranges).
  5. Base mesh: build a grid with Surface → Mesh (U/V counts); Weld + Mesh Angle for clean edges.
  6. Kangaroo setup:
     • Springs from lines: stiffness/rest length respond to pitch.
     • Unary Force: vertical impulse scaled by loudness.
     • Anchors at the frame keep the form legible.
     • Solver (K2) with damping plus a reset button for stability.
  7. Color & thickness: Gradient + Remap tie intensity to color/line weight; time sweeps hue for readability in thumbnails.
  8. UI hooks: package the latest frame as the contact-card hero; store the clip + stats in the person's gallery.
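The smoothing and normalization steps of the script can be sketched in Python. The window size and bounds below are placeholders for the values shown in the script's yellow panels, and `SmoothedFeature`/`remap01` are illustrative names, not Grasshopper components:

```python
# Sketch of the smoothing (Moving Avg) and normalization (Bounds -> Remap)
# stages of the pipeline. Window size and bounds are assumed values.

from collections import deque

class SmoothedFeature:
    """Moving average over a short window to reduce spikes."""

    def __init__(self, window=5):
        self.samples = deque(maxlen=window)

    def push(self, value):
        """Add one sample and return the current smoothed value."""
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)

def remap01(value, lo, hi):
    """Bounds -> Remap equivalent: map [lo, hi] to [0, 1], clamped."""
    if hi == lo:
        return 0.0
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))
```

A shorter window tracks the voice more responsively at the cost of more jitter, which is the trade-off the script's "short windows for responsiveness" choice makes.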

Solution

  • A moving lattice that spikes with louder moments and coils with softer tones.
  • iOS mockups: contact view + gallery of past calls, each artifact a quick read of mood and balance.
  • Storyboards: notice ConverScape → make the call → review the visual and metrics.

Impact

  1. A softer way to reflect on conversations with family, partners, or in coaching/therapy—without storing content.
  2. Team use for talk-time balance and energy.
  3. Extendable to voice notes and meetings.

Key Tech Stack