Media Handling

Note:

Currently, a Plug-in app cannot access the microphone or the camera directly. It can access the camera indirectly if you create an app that includes both JavaScript and PDK components (see JavaScript and Plug-in Interface): use the Camera API from your JavaScript code to capture an image to device storage, then load and process the captured image in your Plug-in code. Camera data itself is never available directly from Plug-in code.


Playing Video

Supported video formats:

  • MPEG-4
  • H.263
  • H.264 (baseline profile only)

Using PDL calls to play video

The PDK comes with its own library of calls for playing video: SDL_cinema. The API is fairly simple. Video is decoded to the lower-level video layer, and you write to the application layer's alpha channel to let the video show through. See the section on alpha blending in Display Access for more information about alpha blending and the destination frame buffer's alpha channel.

Note:

The SDL_cinema API works only on the device in full-screen PDK apps. It does not work on the desktop or in hybrid apps. To play video from the JavaScript side of a hybrid app, consult our video documentation.

To have your application play a video:

  1. Clear the screen to expose the video layer:

    // Set the clear color so the alpha channel is cleared to zero
    glClearColor(0, 0, 0, 0);
    
    // Re-enable alpha writes, since we disabled them earlier
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    
    // Clear the color buffer (including alpha)
    glClear(GL_COLOR_BUFFER_BIT);
    
    // Swap buffers to put the cleared frame on-screen
    SDL_GL_SwapBuffers();
    
    

    To have controls appear on top of the video:

    1. Disable alpha blending.
    2. Create a texture for your overlaid image that has an alpha channel (GL_RGBA format).
    3. Blit (copy) the image over the top of the cleared background.
  2. Initialize cinema:

    if (!CIN_Init())
      printf("Initialization failed!\n");
    
    
  3. Load a file:

    if (!CIN_LoadCIN("file:///var/home/root/PalmPre.mp4"))
      printf("Loading failed\n");
    
    
  4. Play file:

    CIN_Play();
    
    

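The four steps above can be combined into one routine. The sketch below is illustrative only: it assumes a full-screen PDK app on the device with an SDL/OpenGL ES context already set up and the SDL_cinema header available, and reuses the sample file path from step 3.

```c
// Illustrative sketch: assumes an initialized SDL/OpenGL ES context
// in a full-screen PDK app running on the device.
#include <stdio.h>
#include <GLES/gl.h>
#include "SDL.h"
#include "SDL_cinema.h"

static void play_video(void)
{
    // Step 1: clear the screen with alpha = 0 so the video layer shows through
    glClearColor(0, 0, 0, 0);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glClear(GL_COLOR_BUFFER_BIT);
    SDL_GL_SwapBuffers();

    // Step 2: initialize cinema
    if (!CIN_Init()) {
        printf("Initialization failed!\n");
        return;
    }

    // Step 3: load the file
    if (!CIN_LoadCIN("file:///var/home/root/PalmPre.mp4")) {
        printf("Loading failed\n");
        return;
    }

    // Step 4: play it
    CIN_Play();
}
```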
Playing Audio

Supported audio formats:

  • Ogg Vorbis
  • mp3

Playing application music

If your application plays music, you can use the following PDL call:

PDL_Err PDL_NotifyMusicPlaying(PDL_bool MusicPlaying);

If called with PDL_TRUE, any music that the system media player is playing should stop or pause. This avoids having two songs playing at the same time. Other apps that are running also receive this notification, but their compliance is not guaranteed.
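For example, a game might pause the system player when its own soundtrack starts. Note one assumption in the sketch below: calling the function with PDL_FALSE when your music stops is inferred from the boolean parameter; the text above only describes the PDL_TRUE case.

```c
// Sketch (device-only): pause system music while our soundtrack plays.
#include "PDL.h"

void start_soundtrack(void)
{
    // Ask the system media player to stop or pause its playback
    PDL_NotifyMusicPlaying(PDL_TRUE);
    /* ... start playing our music here ... */
}

void stop_soundtrack(void)
{
    /* ... stop our music ... */
    // Assumption: PDL_FALSE signals that we are no longer playing music
    PDL_NotifyMusicPlaying(PDL_FALSE);
}
```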

Ensuring your audio plays smoothly

There are two basic problems that can occur when playing audio:

  1. Latency--A noticeable delay between the time audio playback is requested and the time it is actually heard.

  2. Stuttering--The audio does not play smoothly, but in intermittent bursts that sound like an automatic gun firing: a repeated sequence of sound and silence.

Ideally, you want your audio to play smoothly, without a noticeable silence before it starts and without stuttering while it plays. At a low level, audio is streamed in large data buffers that are sent to the device audio hardware to play. This data needs to be sent as fast as the hardware can play it. For example, if your app sends a buffer that takes 200ms to play, then it needs to send more data before 200ms has passed or silence occurs. The SDL and other lower-level system processes handle this, but your application sets the buffer size and playing speed. Both of these can matter with respect to latency and stuttering.

The application specifies a sample rate and buffer sample size using the Mix_OpenAudio call. The sample rate is the number of data points played per second. For example, a CD has a standard sample rate of 44,100 samples per second. The higher the sample rate, the faster the data in the buffer is consumed. For example, if your sample rate is 22050 samples per second and your buffer size is 2048 samples, then your buffer holds about 93ms of playback (2048 / 22050 ≈ 0.093 seconds). If you increase the sample rate to 44100, the buffer holds only about 46ms of playback, since it is consumed twice as fast.

In short, the larger the buffer relative to the sample rate, the less often the system has to feed the sound hardware. This is good if your app is CPU intensive, since the CPU spends less time servicing audio. So why not just use a huge buffer? Because when you tell the system to play a sound, it mixes that sound data into the next buffer it sends to the hardware, while the hardware continues playing the buffer it was last sent. If the buffer is big, there can be a delay before the hardware finishes the current buffer and gets to yours. This is latency: the time between when your app requests a sound and when the user actually starts hearing it. In the worst case, the latency equals your buffer's play time.

Stuttering occurs when your buffer size is too small relative to the sample rate. In this case, the CPU cannot keep up with refilling the buffer, and intermittent sound is heard. The more CPU intensive your app is, and the smaller your buffer, the more stuttering you will hear.

As the application developer, you must balance the trade-off between latency and stuttering, and you must do so on a per-app basis. If latency is higher than you would like, reduce the buffer size or increase the sample rate; if you hear stuttering, increase the buffer size or reduce the sample rate.

Here is the format of the Mix_OpenAudio call:

int Mix_OpenAudio(int frequency, Uint16 format, int channels, int chunksize)

The frequency parameter is the sample rate, and chunksize is the buffer size in samples. As a starting point, you might use 22050 and 2048 for these, as they are considered standard values for games.
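A call with those starting values might look like the sketch below. The format and channel arguments are our assumptions, since the text above does not specify them: AUDIO_S16SYS (signed 16-bit samples in system byte order) and 2 channels (stereo) are common choices with SDL_mixer.

```c
// Sketch: open the mixer with the suggested starting values.
#include <stdio.h>
#include "SDL.h"
#include "SDL_mixer.h"

int open_audio(void)
{
    // 22050 Hz sample rate, signed 16-bit samples, stereo, 2048-sample buffer
    if (Mix_OpenAudio(22050, AUDIO_S16SYS, 2, 2048) == -1) {
        fprintf(stderr, "Mix_OpenAudio failed: %s\n", Mix_GetError());
        return -1;
    }
    return 0;
}
```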