(For more resources on Plone, see here.)
There are at least four use cases when we think of integrating audio in a web application:
- We want to provide an audio database with static files for download.
- We have audio that we want to have streamed to the Internet (for example, as a podcast).
- We want an audio file/live show streamed to the Internet as an Internet radio service.
- We want some sound to be played when the site is loaded or shown.
In this article series, we will discuss three of the four cases. Streaming support is limited to use case 2: we can stream to one client, as a podcast does, but not to many clients at once, as an Internet radio station does. For that, we need special software such as Icecast or SHOUTcast. Further, we will investigate how to solve use cases 1, 2, and 3 with the Plone CMS and extensions. Technically, these are the topics covered in this article series:
- Manipulation of audio content stored as File content in Plone
- The different formats used for the binary storage of audio data
- Storing and accessing MP3 audio metadata with the ID3 tag format
- Managing metadata, formats, and playlists with p4a.ploneaudio in Plone
- Including a custom embedded audio player in Plone
- Using the Flowplayer product to include an audio player standalone in rich text and as a portlet
- Previewing the audio element of HTML5
- Extracting metadata from a FLAC file using mutagen
Uploading audio files with an unmodified Plone installation
The out-of-the-box support of Plone for audio content is limited. What we can do is upload an audio file to the ZODB using the File content type of Plone. A File is nothing more and nothing less than a simple binary file: Plone makes no difference between an MP3 file and a ZIP, an EXE, or an RPM binary file.
When adding File content to Plone, we need to upload a file (of course!). We don’t necessarily need to specify a title, as the filename is used if the title is omitted. The filename is always taken as the short name (ID) of the object. This limits the number of files with any specific name to one per container.
While uploading a file, Plone tries to recognize the MIME type and the size of the file. This is the smallest subset of information shared by all binary files the content type File was intended for. Normally, detecting the MIME type for standard audio is not a problem if the file extension is correctly set.
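To illustrate extension-based MIME detection, here is a minimal sketch using Python’s standard library `mimetypes` module. Note that this is not the mechanism Plone itself uses (Plone has its own MIME type registry); the filenames are made-up examples.

```python
import mimetypes

# Guess the MIME type from the file extension alone, similar in
# spirit to what happens when a file is uploaded to Plone.
for name in ("song.mp3", "song.wav", "notes.txt"):
    mime, encoding = mimetypes.guess_type(name)
    print(name, "->", mime)
```

For `song.mp3` this yields `audio/mpeg`; when the extension is unknown, `guess_type` returns `None` and the file falls back to generic binary handling.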
Clicking on the link in the default view either downloads the file or opens it with the favorite player of your operating system. This behavior depends on the settings of the target browser and corresponds to use case 1 from our list above. It goes without saying that we can add the default metadata to files and organize them in folders.
Like Images, File objects do not have a workflow associated in a default Plone setup. They inherit the read and write permissions from the container they are placed into. Still, we can add an existing workflow to this content type or create a new one via the portal_workflow tool if we want.
That’s pretty much it. Fortunately, we can utilize some extensions to enhance the Plone audio story greatly.
What we will see in this article is as follows: First, we will go over some theoretical ground. We will see what formats are available for storing audio content and which is best for which purpose. Later we will investigate the Plone4Artists extension for Plone’s File content type—p4a.ploneaudio. We will talk about metadata especially used for audio content and how to manipulate it. As a concrete example, we will use mutagen to extract metadata from a FLAC file to add FLAC support to p4a.ploneaudio. Finally, we will have a word on streaming audio data over the Web and see how to embed a Flash player into our Plone site. We will see how we can do this programmatically and also with the help of a third-party product called collective.flowplayer. At the very end of the article, we have a small technical preview of HTML5, where a dedicated audio element is available. This element allows us to embed audio directly into our HTML page without the detour via Flash.
Accessing audio content in Plone
Once we upload a file we want to work with to Plone, we will link it with other content and display it in one way or another. There are several ways of accessing audio data in Plone. It can be accessed in the visual editor by editors, in templates by integrators and in Python code by developers.
Unlike for images, there is no special option in the visual editor to embed file/audio content into a page. The only way to access an audio file with Kupu is to use an internal link. The file is displayed as a normal link; when clicked, it is either saved or opened with the music player of your operating system, just as in the standard view of the File content type. Of course, it is possible to reference external audio files as well.
Page template access
As there is no special access method in Kupu, there is none in page templates. If we need to access a file there, we can use the absolute_url method of the audio content object. This computes a link we can refer to. So the only way to access a file from another context is to refer to its URL.
Python script access
If we need to access the content of an (audio) file in a Python script, we can get the binary data with the Archetype accessor getFile.
>>> binary = context.getFile()
This method returns the data wrapped into a Zope OFS.File object. To access the raw data as a string, we need to do the following:
>>> rawdata = str(binary.data)
Accessing the raw data of an audio file might be useful if we want to do format transformations on the fly or other direct manipulation of the data.
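As an illustration of working with raw audio bytes, here is a minimal sketch that parses an ID3v1 tag, the fixed 128-byte block found at the end of many MP3 files. The function name is ours, not a Plone or Archetypes API; in practice, a library such as mutagen (discussed later) handles tags far more robustly.

```python
def read_id3v1(rawdata):
    """Parse the 128-byte ID3v1 tag at the end of an MP3, if present.

    Returns a dict of tag fields, or None when no ID3v1 tag is found.
    """
    tag = rawdata[-128:]
    if len(tag) < 128 or not tag.startswith(b"TAG"):
        return None

    def text(chunk):
        # ID3v1 fields are fixed-width, NUL-padded Latin-1 strings.
        return chunk.split(b"\x00", 1)[0].decode("latin-1").strip()

    return {
        "title": text(tag[3:33]),
        "artist": text(tag[33:63]),
        "album": text(tag[63:93]),
        "year": text(tag[93:97]),
        "comment": text(tag[97:127]),
        "genre": tag[127],  # genre is a one-byte index
    }

# Build a synthetic byte string for demonstration (no real MP3 needed).
fake = b"\x00" * 64  # stand-in for audio frames
fake += (b"TAG"
         + b"My Song".ljust(30, b"\x00")
         + b"Some Artist".ljust(30, b"\x00")
         + b"An Album".ljust(30, b"\x00")
         + b"2009"
         + b"\x00" * 30
         + bytes([17]))
print(read_id3v1(fake)["title"])  # My Song
```

On a real file, we would pass the string obtained from `str(binary.data)` (as raw bytes) instead of the synthetic data.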
If we write our own content type and want to save audio data with an object, we need a file field. This field stores the binary data and takes care of communicating with the browser with adequate view and edit widgets. The file field is defined in the Field module of the Archetypes product. Apart from the properties it defines, the field inherits from the ObjectField base class. The following properties are important.
The type property provides a unique name for the field. We usually don’t need to change this. The default property defines the default value for this field. It is normally empty. If we want to change it, we need to provide an instance of the class given by the content_class property.
One field of the schema can be marked as primary. This field can be retrieved by the getPrimaryField accessor. When accessing the content object with FTP, the content of the primary field is transmitted to the client.
Like every other field, the file field needs a widget. The standard FileWidget is defined in the Widget module of the Archetypes product.
The content_class property declares the class in which the actual binary data is stored. By default, the File class from Zope’s OFS.Image module is used. This class supports chunk-wise transmission of the data through the publisher.
A file field can be accessed like any other field by its accessor method. This method is either defined as a property of the field or constructed from its name. If the name were “audio”, the accessor would be getAudio. The accessor is generated from the “get” prefix with the capitalized name of the field.
Before we go on with Plone and see how we can enhance the story of audio processing and manipulate audio data, we will glance at audio formats. We will see how raw audio data is compressed to enable effective audio storage and streaming. We need to have some basic audio know-how about some of the terminology to understand how we can effectively process audio for our own purposes.
As with images, there are several formats in which audio content can be stored. We want to learn a bit of theoretical background. This eases the decision of choosing the right format for our use case.
An analog acoustic signal can be displayed as a wave.
When digitized, the wave is approximated by small rectangles under the curve. The more rectangles used, the better the fidelity of the digital variant. The number of samples (rectangles) taken per second is called the sampling rate.
Usual sampling rates include:
- 44.1 kHz (44,100 samples per second): CD quality
- 32 kHz: Speech
- 14.5 kHz: FM radio bandwidth
- 10 kHz: AM radio
Each sample is stored with a fixed number of bits. This value is called the audio bit depth or bit resolution.
Finally, there is a third value that we already know from the analog side. It is the channel. We have one channel for mono and two channels for stereo. For the digital variant, this means a doubling of data if stereo is used.
So let’s do a calculation. Let’s assume we have an audio podcast with a length of eight minutes, which we want to stream in stereo CD quality. The sampling rate corresponds with the highest frequency of sound that is stored. For accurate reproduction of the original sound, the sampling rate has to be at least twice the highest frequency in the sound. Most humans cannot hear frequencies higher than 20 kHz, which leads to the CD sampling rate of 44,100 samples per second. We want to use a bit resolution of 16, the standard bit depth for audio CDs. Lastly, we have two channels for stereo: 44,100 × 16 × 2 × 60 × 8 = 677,376,000 bits = 84,672,000 bytes ≈ 80.7 MB
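The calculation above can be written out in a few lines of Python, with the values taken straight from the text:

```python
sampling_rate = 44100   # samples per second (CD quality)
bit_depth = 16          # bits per sample
channels = 2            # stereo
seconds = 8 * 60        # eight minutes

bits = sampling_rate * bit_depth * channels * seconds
bytes_total = bits // 8
megabytes = bytes_total / 2**20

print(bits, bytes_total, round(megabytes, 1))  # 677376000 84672000 80.7
```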
This is quite a lot of data for eight minutes of CD-quality sound. We do not want to store so much data and, more importantly, we do not want to send so much data over the Internet. So what we do is compress the data. Zipping the data would not give us a big effect because of the binary structure of digital audio data. There are different types of compression for different types of data. ZIP is good for text, JPEG is good for images, and MP3 is good for music—but why? Each of these algorithms takes the nature of the data into account. ZIP looks for redundant characters, JPEG unifies similar color areas, and MP3 strips from the raw data the frequencies humans do not hear.