simple output for software rendering of audio
category: code [glöplog]
Okay, I've never done any proper audio. But making an intro/demo without sound is quite lame, so I'd like to know how to make some simple audio on Windows.
Something like "fill in this buffer and I'll play it" would be good, but I've also heard of these gm.dls and never really found out their true meaning.
Also if you'd like to point out something important about softsynthing in general, go ahead.
rudimentary but should fill your purpose:
Code:
LPDIRECTSOUND m_pDS;
LPDIRECTSOUNDBUFFER m_pPrimary;
LPDIRECTSOUNDBUFFER m_pSecondary;
WAVEFORMATEX format={WAVE_FORMAT_PCM, 1, 44100, 44100*2, 2, 16}; // mono, 44.1 kHz, 16-bit
DSBUFFERDESC bufferDesc ={sizeof(DSBUFFERDESC), DSBCAPS_PRIMARYBUFFER};
DSBUFFERDESC bufferDesc2={sizeof(DSBUFFERDESC), DSBCAPS_GETCURRENTPOSITION2 | DSBCAPS_GLOBALFOCUS, REALSIZE, NULL, &format, NULL }; // REALSIZE = your buffer size in bytes
LPVOID p1;
DWORD l1;
DirectSoundCreate(0, &m_pDS, 0);
m_pDS->SetCooperativeLevel(hWnd,DSSCL_PRIORITY);
m_pDS->CreateSoundBuffer(&bufferDesc,&m_pPrimary, NULL);
m_pPrimary->SetFormat(&format);
m_pDS->CreateSoundBuffer(&bufferDesc2,&m_pSecondary,NULL);
m_pSecondary->Lock(0,nBufferSize,&p1,&l1,NULL,NULL,NULL); // nBufferSize = same size you passed as REALSIZE
// fill the buffer here
m_pSecondary->Unlock(p1,l1,NULL,NULL);
m_pSecondary->Play(0,0,DSBPLAY_LOOPING); // loop so the buffer keeps playing while you refill it
looks like the 2 buffers are not related, so what does the primary one do?
The primary one is just for telling DirectSound that you're in fact going to output audio in *gasp* CD QUALITY! OMG!
Older versions of Windows/DirectX really rendered everything at 22 kHz or even 11 kHz if you didn't; on XP and above setting the primary buffer format should be pretty much obsolete, but you never know.
[insert a few rants about how fucking buggy XAudio2 is and how this kinda disables its use in ANY release product although it's almost a good API apart from a few deadly design mistakes that are kinda easy to circumvent tho here]
Yes, _kb, please insert the rant..
Okay, that seems like a nice way to go about it. But when should I update the buffer? Once every frame, second, when? And how big should the buffer be?
Oh, and if kb has some better way, I'd like to hear about it ;)
By all means keep away from portaudio and sdl_mixer.
I use
Init:
Code:
soundrate = samplingfreq;
soundbuffer = new float[4*buflen];
ZeroMemory(soundbuffer, sizeof(float)*4*buflen);
soundbufsize = buflen;
wavehdr.dwFlags = WHDR_BEGINLOOP|WHDR_ENDLOOP;
wavehdr.lpData = (LPSTR)soundbuffer;
wavehdr.dwBufferLength = sizeof(float)*4*buflen;
wavehdr.dwLoops = -1;
WAVEFORMATEX pcmwf;
pcmwf.wFormatTag = WAVE_FORMAT_IEEE_FLOAT;
pcmwf.nChannels = 2;
pcmwf.nSamplesPerSec = samplingfreq;
pcmwf.wBitsPerSample = 32;
pcmwf.nBlockAlign = 8;
pcmwf.nAvgBytesPerSec = 8*samplingfreq;
pcmwf.cbSize = 0;
waveOutOpen(&waveout, 0, &pcmwf, 0, 0, WAVE_MAPPED|WAVE_FORMAT_DIRECT|CALLBACK_NULL);
waveOutPrepareHeader(waveout, &wavehdr, sizeof(WAVEHDR));
waveOutPause(waveout);
waveOutWrite(waveout, &wavehdr, sizeof(WAVEHDR));
// call waveOutRestart(waveout) when you're ready to start playback
Audio loop:
Code:
void SoundThread()
{
static int lastbuf = 1;
static MMTIME mmtime =
{
TIME_SAMPLES,
0
};
waveOutGetPosition(waveout, &mmtime, sizeof(MMTIME));
int curbuf = (mmtime.u.sample/soundbufsize)%2; // currently playing buffer
if(curbuf != lastbuf)
{
float *buffer = &soundbuffer[lastbuf*soundbufsize*2]; // the half that just finished playing
for(int i=0; i<soundbufsize; ++i)
{
*buffer++ = 0.0f; // left  -- silence here; put your synth output instead
*buffer++ = 0.0f; // right
}
}
lastbuf = curbuf;
}
I haven't had any problems with PortAudio, although I've never used it for demos.
that's because you never made demos!
@#ponce: I was planning to use sdl_mixer. Why should I avoid it?
Multiplying a float by 20000 is pretty much all you need for 16-bit signed PCM conversion.
for instance:
sin(theta) * 20000
gives you some nice data to write to your audio output to produce a sine wave
Oh- and then your limit should be about -1.2f to 1.2f
The way it's done in 4klang:
Using the USE_SOUND_THREAD define will call the RenderSound function in its own thread, executing it in parallel to the main loop.
It's a simple way to get the sound rendered in parallel to the main loop, but it has some drawbacks: you need to make sure your RenderSound function is always filling the buffer faster than it's played back :)
You can give the buffer a head start by putting in a Sleep() call for a few seconds after you've called InitSound, though.
By undefining USE_SOUND_THREAD, the buffer is filled completely before starting the main loop.
Code:
#include "windows.h"
#include "mmsystem.h"
#include "mmreg.h"
#define USE_SOUND_THREAD
#define SAMPLE_RATE 44100
#define MAX_SAMPLES (SAMPLE_RATE*60*5)
#define FLOAT_32BIT
#ifdef FLOAT_32BIT
#define SAMPLE_TYPE float
#else
#define SAMPLE_TYPE short
#endif
SAMPLE_TYPE lpSoundBuffer[MAX_SAMPLES*2];
HWAVEOUT hWaveOut;
WAVEFORMATEX WaveFMT =
{
#ifdef FLOAT_32BIT
WAVE_FORMAT_IEEE_FLOAT,
#else
WAVE_FORMAT_PCM,
#endif
2, // channels
SAMPLE_RATE, // samples per sec
SAMPLE_RATE*sizeof(SAMPLE_TYPE)*2, // bytes per sec
sizeof(SAMPLE_TYPE)*2, // block alignment;
sizeof(SAMPLE_TYPE)*8, // bits per sample
0 // extension not needed
};
WAVEHDR WaveHDR =
{
(LPSTR)lpSoundBuffer,
MAX_SAMPLES*sizeof(SAMPLE_TYPE)*2,
0,
0,
0,
0,
0,
0
};
void RenderSound(SAMPLE_TYPE* buffer)
{
// fill your sound buffer here
}
void InitSound()
{
#ifdef USE_SOUND_THREAD
CreateThread(0, 0, (LPTHREAD_START_ROUTINE)RenderSound, lpSoundBuffer, 0, 0);
#else
RenderSound(lpSoundBuffer);
#endif
waveOutOpen ( &hWaveOut, WAVE_MAPPER, &WaveFMT, NULL, 0, CALLBACK_NULL );
waveOutPrepareHeader( hWaveOut, &WaveHDR, sizeof(WaveHDR) );
waveOutWrite ( hWaveOut, &WaveHDR, sizeof(WaveHDR) );
}
void main(void)
{
InitSound();
do
{
// main loop goes here
} while (condition);
}
this has full source examples of what you're trying to do, though they're quite messy. Basically it should show you how Gopher's example would fit into a project. :)
shameless plug hehehe
Quote:
I haven't had any problems with PortAudio, although I've never used it for demo's.
The output latency given by Pa_GetStreamInfo is different on Windows and Linux, it also depends on the sound card, which caused us a lot of hassle. BASS gets it right (I hope).
Quote:
@#ponce: I was planning to use sdl_mixer. Why should I avoid it?
SDL_mixer does not give you a simple callback (it owns it) and has a crappy 16-bit mixer. You can also use SDL directly for the callback, but SDL 1.2 does not output 24-bit audio.
Quote:
The output latency given by Pa_GetStreamInfo is different on Windows and Linux
What would you expect? The driver/audio subsystem determines the latency, not PortAudio.
It was not only different but false.
So if sdl and pulseaudio are out, is there a good cross-platform way to do this?
I would say BASS | FMODEX | OpenAL
you mean, like not using #ifdefs?
Btw: Does somebody have a current PulseAudio compile (Win32 dlls, possibly libs)? There's only an old version floating around, and when I looked at the new sources it had a whole shitload of dependencies. I'm not really keen on compiling it myself...
@#ponce: you know PlaySound is also cross-platform. If he's just looking into filling a wave buffer and playing it, this could also be a nice option.
Beep()