Soma26 - Brain Stimulation Through Music

Cross-platform generative music application powered by a Rust audio engine, creating infinite adaptive soundscapes for focus, relaxation, and cognitive enhancement.

An innovative cross-platform application that generates infinite, adaptive music designed to enhance cognitive performance, focus, and relaxation. Built with a custom Rust audio engine that runs seamlessly across web, mobile, and desktop platforms.

The Vision

Traditional music listening offers fixed compositions that eventually become familiar and lose their effectiveness for cognitive tasks. Soma26 solves this by generating infinite, never-repeating musical patterns that maintain their effectiveness for brain stimulation over extended periods.

The application combines principles from neuroscience, generative music, and modern software engineering to create an experience that adapts to your needs.

Core Architecture

Rust Audio Engine

The heart of Soma26 is a high-performance audio engine written in Rust:

// Core audio generation engine
pub struct SomaEngine {
    sample_rate: u32,
    generators: Vec<Box<dyn AudioGenerator>>,
    effects_chain: EffectsChain,
    state: EngineState,
}

impl SomaEngine {
    pub fn generate_frame(&mut self, buffer: &mut [f32]) {
        // Generate audio samples
        for generator in &mut self.generators {
            generator.process(buffer, &self.state);
        }

        // Apply effects chain
        self.effects_chain.process(buffer);

        // Update engine state for next frame
        self.state.advance();
    }

    pub fn set_scene(&mut self, scene: Scene) {
        // Update generators based on scene parameters
        self.reconfigure_for_scene(scene);
    }
}
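The engine iterates a vector of boxed generators that all write into the same buffer. A minimal sketch of one such layer, assuming a simplified `AudioGenerator` trait (the real trait receives the engine state rather than just a sample rate; all names here are illustrative):

```rust
// Hypothetical sketch: one audio layer implementing an AudioGenerator
// trait like the one SomaEngine iterates over. The trait is simplified
// and the field names are assumptions, not the actual Soma26 API.
pub trait AudioGenerator {
    fn process(&mut self, buffer: &mut [f32], sample_rate: u32);
}

pub struct SineGenerator {
    pub freq: f32,  // oscillator frequency in Hz
    pub phase: f32, // normalized phase in [0.0, 1.0)
    pub gain: f32,  // layer volume
}

impl AudioGenerator for SineGenerator {
    fn process(&mut self, buffer: &mut [f32], sample_rate: u32) {
        let step = self.freq / sample_rate as f32;
        for sample in buffer.iter_mut() {
            // Add into the shared buffer so independent layers mix
            *sample += self.gain * (2.0 * std::f32::consts::PI * self.phase).sin();
            self.phase = (self.phase + step) % 1.0;
        }
    }
}
```

Because each layer adds into the shared buffer rather than overwriting it, the `generators` vector naturally produces layered, independently evolving sound.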

Cross-Platform Compilation

The Rust engine compiles to multiple targets:

WebAssembly (Browser)

[lib]
crate-type = ["cdylib"]

cargo build --target wasm32-unknown-unknown --release

iOS

cargo build --target aarch64-apple-ios --release

macOS

cargo build --target x86_64-apple-darwin --release
cargo build --target aarch64-apple-darwin --release

Android

cargo build --target aarch64-linux-android --release
cargo build --target armv7-linux-androideabi --release

Windows

cargo build --target x86_64-pc-windows-msvc --release

Technical Features

1. Infinite Music Generation

The engine creates never-repeating soundscapes using:

  • Generative Algorithms: Probabilistic music generation based on music theory
  • Layered Synthesis: Multiple sound layers that evolve independently
  • Adaptive Tempo: BPM adjusts based on selected scene and user state
  • Harmonic Constraints: Ensures pleasant-sounding combinations

pub struct MusicGenerator {
    scales: Vec<Scale>,
    chord_progressions: Vec<ChordProgression>,
    rhythm_engine: RhythmEngine,
    melody_generator: MelodyGenerator,
}

impl MusicGenerator {
    fn generate_next_phrase(&mut self) -> Phrase {
        let key = self.select_key();
        // Advance the currently active chord progression
        let chord = self.chord_progressions[0].next_chord();
        let rhythm = self.rhythm_engine.generate_pattern();
        let melody = self.melody_generator.generate(key, chord, rhythm);

        Phrase {
            notes: melody,
            duration: rhythm.total_duration(),
            key,
            chord,
        }
    }
}
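The code above leaves open how an individual note is actually chosen. One common approach, consistent with the "probabilistic generation" and "harmonic constraints" bullets, is a weighted draw over scale degrees; this is a hedged sketch with assumed weights and names, not the real Soma26 code:

```rust
// Hypothetical sketch: notes are drawn from the current scale, with
// chord tones weighted 3x higher than other degrees so the result
// stays consonant while never repeating exactly.
// `roll` stands in for a uniform random sample in [0.0, 1.0).
fn pick_note(scale: &[u8], chord_tones: &[u8], roll: f32) -> u8 {
    let weights: Vec<f32> = scale
        .iter()
        .map(|n| if chord_tones.contains(n) { 3.0 } else { 1.0 })
        .collect();
    let total: f32 = weights.iter().sum();
    let mut target = roll * total;
    for (note, w) in scale.iter().zip(&weights) {
        if target < *w {
            return *note;
        }
        target -= w;
    }
    // Fallback for roll values at the very top of the range
    *scale.last().unwrap()
}
```

Passing the random roll in as a parameter keeps the function deterministic and easy to test; the caller owns the RNG.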

2. Scene System

Pre-configured audio environments for different use cases:

// Next.js frontend scene management
interface Scene {
  id: string;
  name: string;
  description: string;
  parameters: {
    tempo: number; // 60-180 BPM
    complexity: number; // 0.0-1.0
    brightness: number; // 0.0-1.0 (spectral centroid)
    depth: number; // 0.0-1.0 (reverb/spatial)
    energy: number; // 0.0-1.0
  };
  colors: {
    primary: string;
    secondary: string;
    background: string;
  };
}

const SCENES: Scene[] = [
  {
    id: "focus",
    name: "Deep Focus",
    description: "Minimize distractions, maximize concentration",
    parameters: {
      tempo: 80,
      complexity: 0.3,
      brightness: 0.4,
      depth: 0.7,
      energy: 0.5,
    },
    colors: {
      primary: "#6366f1", // Indigo
      secondary: "#8b5cf6", // Purple
      background: "#0f172a", // Dark slate
    },
  },
  {
    id: "relax",
    name: "Relaxation",
    description: "Calm your mind, reduce stress",
    parameters: {
      tempo: 60,
      complexity: 0.2,
      brightness: 0.3,
      depth: 0.9,
      energy: 0.2,
    },
    colors: {
      primary: "#06b6d4", // Cyan
      secondary: "#0891b2", // Teal
      background: "#0c4a6e", // Dark blue
    },
  },
  {
    id: "energy",
    name: "Energy Boost",
    description: "Increase alertness and motivation",
    parameters: {
      tempo: 140,
      complexity: 0.7,
      brightness: 0.8,
      depth: 0.4,
      energy: 0.9,
    },
    colors: {
      primary: "#f97316", // Orange
      secondary: "#ea580c", // Dark orange
      background: "#7c2d12", // Dark red
    },
  },
];
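On the engine side, these scene parameters have to become concrete synthesis settings. A sketch of one plausible mapping, with assumed struct and field names (the ranges mirror the `Scene` interface above: tempo in BPM, the rest normalized to 0.0-1.0):

```rust
// Hypothetical mapping from frontend scene parameters to concrete
// generator settings. Names and ranges are illustrative assumptions.
pub struct SceneParams {
    pub tempo: f32,      // BPM
    pub complexity: f32, // 0.0-1.0
    pub brightness: f32, // 0.0-1.0
}

pub struct GeneratorSettings {
    pub samples_per_beat: u32,
    pub max_voices: u32,
    pub filter_cutoff_hz: f32,
}

pub fn apply_scene(p: &SceneParams, sample_rate: u32) -> GeneratorSettings {
    GeneratorSettings {
        // One beat at `tempo` BPM, expressed in samples
        samples_per_beat: (sample_rate as f32 * 60.0 / p.tempo) as u32,
        // More complexity -> more simultaneous voices (1..=8)
        max_voices: 1 + (p.complexity * 7.0).round() as u32,
        // Brightness maps to a low-pass cutoff between 200 Hz and 8 kHz
        filter_cutoff_hz: 200.0 + p.brightness * 7800.0,
    }
}
```

For the "Deep Focus" scene at 48 kHz, for example, this yields 36,000 samples per beat (80 BPM) and a dark-leaning cutoff, matching the scene's intent.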

3. WebAssembly Integration

Bridging Rust engine with Next.js frontend:

// wasm-bindings.ts
import init, { SomaEngine } from "@/wasm/soma_engine";

class AudioEngineWrapper {
  private engine: SomaEngine | null = null;
  private audioContext: AudioContext | null = null;
  private processorNode: AudioWorkletNode | null = null;

  async initialize() {
    // Initialize WASM module
    await init();

    // Create audio context
    this.audioContext = new AudioContext({ sampleRate: 48000 });

    // Load audio worklet
    await this.audioContext.audioWorklet.addModule("/audio-processor.js");

    // Create processor node
    this.processorNode = new AudioWorkletNode(
      this.audioContext,
      "soma-processor",
    );

    // Initialize Rust engine
    this.engine = new SomaEngine(this.audioContext.sampleRate);

    // Connect to output
    this.processorNode.connect(this.audioContext.destination);
  }

  setScene(sceneId: string) {
    if (this.engine) {
      this.engine.set_scene(sceneId);
    }
  }

  start() {
    if (this.audioContext?.state === "suspended") {
      this.audioContext.resume();
    }
  }

  stop() {
    if (this.audioContext?.state === "running") {
      this.audioContext.suspend();
    }
  }
}

export const audioEngine = new AudioEngineWrapper();

4. Audio Worklet for Real-Time Processing

// public/audio-processor.js
class SomaProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.bufferSize = 512;
    this.buffer = new Float32Array(this.bufferSize);
    this.bufferIndex = 0;

    this.port.onmessage = (e) => {
      if (e.data.type === "generate") {
        // Receive generated audio from Rust engine
        this.buffer = new Float32Array(e.data.samples);
        this.bufferIndex = 0;
      }
    };
  }

  process(inputs, outputs, parameters) {
    const output = outputs[0];
    const channel = output[0];

    if (this.bufferIndex >= this.buffer.length) {
      // Out of samples: request more from the Rust engine and output
      // silence for this quantum instead of replaying stale data.
      this.port.postMessage({ type: "needSamples" });
      channel.fill(0);
      return true;
    }

    for (let i = 0; i < channel.length; i++) {
      channel[i] =
        this.bufferIndex < this.buffer.length
          ? this.buffer[this.bufferIndex++]
          : 0;
    }

    return true;
  }
}

registerProcessor("soma-processor", SomaProcessor);
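This request/response handoff only glitches if generation falls behind playback. On the engine side, a simple FIFO can decouple the generation thread from the real-time consumer and degrade to silence on underrun; this is a sketch with assumed names, not the actual Soma26 implementation:

```rust
use std::collections::VecDeque;

// Hypothetical sketch: a FIFO decouples the generation side
// (push_samples) from the playback side (pop_samples), which
// falls back to silence instead of blocking when the queue runs dry.
pub struct SampleQueue {
    samples: VecDeque<f32>,
}

impl SampleQueue {
    pub fn new() -> Self {
        Self { samples: VecDeque::new() }
    }

    pub fn push_samples(&mut self, chunk: &[f32]) {
        self.samples.extend(chunk);
    }

    /// Fill `out` from the queue; zero-fill (silence) on underrun.
    pub fn pop_samples(&mut self, out: &mut [f32]) {
        for slot in out.iter_mut() {
            *slot = self.samples.pop_front().unwrap_or(0.0);
        }
    }
}
```

A production engine would typically use a pre-allocated, lock-free SPSC ring buffer instead, since a `VecDeque` can allocate on push and must never be locked from a real-time audio thread.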

Frontend Architecture

Next.js Application

Modern React architecture with server components:

// app/page.tsx
'use client';

import { useState, useEffect } from 'react';
import { audioEngine } from '@/lib/audio-engine';
import { SynapticBackground } from '@/components/SynapticBackground';
import { SomaPlayer } from '@/components/SomaPlayer';
import { LoadingScreen } from '@/components/LoadingScreen';

export default function HomePage() {
  const [isPlaying, setIsPlaying] = useState(false);
  const [currentScene, setCurrentScene] = useState('focus');
  const [isLoading, setIsLoading] = useState(true);

  useEffect(() => {
    audioEngine.initialize().then(() => {
      setIsLoading(false);
    });
  }, []);

  const handlePlay = () => {
    if (isPlaying) {
      audioEngine.stop();
    } else {
      audioEngine.start();
    }
    setIsPlaying(!isPlaying);
  };

  const handleSceneChange = (sceneId: string) => {
    setCurrentScene(sceneId);
    audioEngine.setScene(sceneId);
  };

  if (isLoading) {
    return <LoadingScreen />;
  }

  return (
    <div className="relative min-h-screen">
      <SynapticBackground scene={currentScene} />
      <SomaPlayer
        isPlaying={isPlaying}
        currentScene={currentScene}
        onPlay={handlePlay}
        onSceneChange={handleSceneChange}
      />
    </div>
  );
}

Interactive Visual Background

Animated canvas that responds to audio:

// components/SynapticBackground.tsx
'use client';

import { useEffect, useRef } from 'react';

interface Particle {
  x: number;
  y: number;
  vx: number;
  vy: number;
  connections: number[];
}

export function SynapticBackground({ scene }: { scene: string }) {
  const canvasRef = useRef<HTMLCanvasElement>(null);
  const particles = useRef<Particle[]>([]);

  useEffect(() => {
    const canvas = canvasRef.current;
    if (!canvas) return;

    const ctx = canvas.getContext('2d')!;
    const resize = () => {
      canvas.width = window.innerWidth;
      canvas.height = window.innerHeight;
    };
    resize();
    window.addEventListener('resize', resize);

    // Initialize particles
    particles.current = Array.from({ length: 100 }, () => ({
      x: Math.random() * canvas.width,
      y: Math.random() * canvas.height,
      vx: (Math.random() - 0.5) * 0.5,
      vy: (Math.random() - 0.5) * 0.5,
      connections: [],
    }));

    // Animation loop
    let animationId = 0;
    const animate = () => {
      ctx.fillStyle = 'rgba(15, 23, 42, 0.05)';
      ctx.fillRect(0, 0, canvas.width, canvas.height);

      // Update and draw particles
      particles.current.forEach((p, i) => {
        p.x += p.vx;
        p.y += p.vy;

        // Wrap around edges
        if (p.x < 0) p.x = canvas.width;
        if (p.x > canvas.width) p.x = 0;
        if (p.y < 0) p.y = canvas.height;
        if (p.y > canvas.height) p.y = 0;

        // Draw particle
        ctx.fillStyle = '#6366f1';
        ctx.beginPath();
        ctx.arc(p.x, p.y, 2, 0, Math.PI * 2);
        ctx.fill();

        // Draw connections between nearby particles
        particles.current.forEach((p2, j) => {
          if (i >= j) return;
          const dx = p.x - p2.x;
          const dy = p.y - p2.y;
          const dist = Math.sqrt(dx * dx + dy * dy);

          if (dist < 150) {
            ctx.strokeStyle = `rgba(99, 102, 241, ${1 - dist / 150})`;
            ctx.lineWidth = 0.5;
            ctx.beginPath();
            ctx.moveTo(p.x, p.y);
            ctx.lineTo(p2.x, p2.y);
            ctx.stroke();
          }
        });
      });

      animationId = requestAnimationFrame(animate);
    };

    animate();

    // Cancel the pending frame as well as the resize listener so the
    // loop doesn't leak when the scene changes or the component unmounts
    return () => {
      cancelAnimationFrame(animationId);
      window.removeEventListener('resize', resize);
    };
  }, [scene]);

  return (
    <canvas
      ref={canvasRef}
      className="fixed inset-0 -z-10 pointer-events-none bg-slate-950"
    />
  );
}

Progressive Web App

Manifest Configuration

{
  "name": "Soma26",
  "short_name": "Soma",
  "description": "Brain stimulation through music",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#0f172a",
  "theme_color": "#6366f1",
  "icons": [
    {
      "src": "/icon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    },
    {
      "src": "/icon-512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ]
}

Service Worker

// public/sw.js
const CACHE_NAME = "soma-v1";
const ASSETS_TO_CACHE = [
  "/",
  "/manifest.json",
  "/audio-processor.js",
  "/wasm/soma_engine_bg.wasm",
];

self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => {
      return cache.addAll(ASSETS_TO_CACHE);
    }),
  );
});

self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((response) => {
      return response || fetch(event.request);
    }),
  );
});

Performance Optimizations

1. WASM Optimization

Compiled with aggressive optimizations:

[profile.release]
opt-level = 3
lto = true
codegen-units = 1
panic = 'abort'
strip = true

2. Audio Buffer Management

Efficient buffer handling to prevent glitches:

pub struct BufferManager {
    engine: SomaEngine, // used to refill the queue
    buffers: VecDeque<AudioBuffer>,
    buffer_size: usize,
    prefetch_threshold: usize,
}

impl BufferManager {
    pub fn get_next_buffer(&mut self) -> Option<AudioBuffer> {
        let buffer = self.buffers.pop_front();

        // Prefetch more buffers if running low
        if self.buffers.len() < self.prefetch_threshold {
            self.generate_buffers();
        }

        buffer
    }

    fn generate_buffers(&mut self) {
        // Generate buffers in background
        for _ in 0..5 {
            let buffer = self.engine.generate_buffer(self.buffer_size);
            self.buffers.push_back(buffer);
        }
    }
}

3. Web Performance

  • Code splitting for faster initial load
  • Service Worker for offline functionality
  • Lazy loading of WASM module
  • Optimized Canvas rendering (60 FPS)

Deployment

Vercel Configuration

{
  "buildCommand": "npm run build",
  "outputDirectory": ".next",
  "framework": "nextjs",
  "installCommand": "npm install && npm run build:wasm",
  "headers": [
    {
      "source": "/(.*)",
      "headers": [
        {
          "key": "Cross-Origin-Embedder-Policy",
          "value": "require-corp"
        },
        {
          "key": "Cross-Origin-Opener-Policy",
          "value": "same-origin"
        }
      ]
    }
  ]
}

Build Pipeline

# Build Rust engine for WASM
cargo build --target wasm32-unknown-unknown --release

# Generate JS bindings
wasm-bindgen target/wasm32-unknown-unknown/release/soma_engine.wasm \
  --out-dir public/wasm \
  --target web

# Optimize WASM binary
wasm-opt -O3 public/wasm/soma_engine_bg.wasm \
  -o public/wasm/soma_engine_bg.wasm

# Build Next.js app
npm run build

Use Cases

1. Deep Work Sessions

The "Deep Focus" scene provides steady, non-distracting audio that helps maintain concentration for extended periods:

  • Low complexity to minimize cognitive load
  • Moderate tempo (80 BPM) for sustained attention
  • Gentle evolution to prevent monotony
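The "gentle evolution" point can be implemented as a bounded random walk on any scene parameter; a minimal sketch, with assumed names:

```rust
// Hypothetical sketch: drift a normalized parameter by a small random
// step each phrase, clamped to its valid range, so the soundscape
// evolves audibly but never jumps. `noise` is a uniform sample in
// [-1.0, 1.0]; `rate` caps the per-step change.
pub fn drift(value: f32, noise: f32, rate: f32, min: f32, max: f32) -> f32 {
    (value + noise * rate).clamp(min, max)
}
```

Keeping `rate` small (on the order of 0.01 per phrase) means the change is imperceptible moment to moment but noticeable over minutes, which is exactly what prevents the monotony of a fixed loop.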

2. Meditation & Relaxation

The "Relaxation" scene creates a calming environment:

  • Slow tempo (60 BPM) to encourage relaxation
  • High spatial depth for immersive experience
  • Minimal melodic complexity

3. Creative Flow

Custom scenes can be tuned for creative work:

  • Higher complexity for engaging the creative mind
  • Variable tempo to match creative energy
  • Brighter tones for alertness

Technical Challenges & Solutions

Challenge 1: Cross-Platform Audio

Problem: Different platforms handle audio differently.

Solution: Abstracted audio output behind a trait, with platform-specific implementations:

pub trait AudioOutput {
    fn write_samples(&mut self, samples: &[f32]);
    fn get_sample_rate(&self) -> u32;
}

#[cfg(target_arch = "wasm32")]
impl AudioOutput for WebAudioOutput { /* ... */ }

#[cfg(target_os = "ios")]
impl AudioOutput for IOSAudioOutput { /* ... */ }

Challenge 2: Preventing Audio Glitches

Problem: Audio must be generated fast enough to prevent buffer underruns.

Solution: Multi-threaded buffer generation with prefetching:

use rayon::prelude::*;

impl SomaEngine {
    pub fn generate_buffers_parallel(&self, count: usize) -> Vec<AudioBuffer> {
        (0..count)
            .into_par_iter()
            .map(|_| self.generate_buffer())
            .collect()
    }
}

Challenge 3: WASM Binary Size

Problem: Initial WASM binary was 2.5MB.

Solution: Aggressive optimization and stripping reduced it to 800KB:

  • Removed unused dependencies
  • Enabled LTO and stripping
  • Used wasm-opt for further optimization

Future Enhancements

  • Binaural beats integration
  • User-customizable scenes
  • Session history and analytics
  • Social features (share scenes)
  • Native mobile apps (iOS/Android)
  • Desktop applications (macOS/Windows)
  • Spotify/Apple Music integration
  • ML-powered personalization
  • Collaborative listening rooms
  • API for third-party integrations

Tech Stack

  • Audio Engine: Rust (cross-compiled to WASM/iOS/macOS/Android/Windows)
  • Frontend: Next.js 16, React 19, TypeScript
  • Styling: Tailwind CSS 4
  • Audio API: Web Audio API, Audio Worklet
  • Deployment: Vercel
  • Analytics: Google Tag Manager
  • PWA: Service Workers, Web Manifest

Performance Metrics

  • Initial Load: < 2s on 3G
  • WASM Binary: 800KB (gzipped)
  • Audio Latency: < 10ms
  • Canvas Animation: 60 FPS
  • Memory Usage: < 50MB
  • Offline Support: Full functionality

Soma26 demonstrates the power of Rust for cross-platform audio applications, combining high-performance audio generation with modern web technologies to create an immersive, infinite music experience optimized for cognitive enhancement.