Integrating with A Million Ads

Integrating with A Million Ads is simple because we follow the standard HTTP, REST, VAST and DAAST specifications. VAST, in particular, is widespread amongst most ad tech providers and governs the kind of request that we accept and the format of the response.


The tag

We provide the tag within our Studio ad designer tool when a script is ready to be published.

Here is a sample tag that can be inserted into the creative flight on an ad server / SSP / DSP (usually in the VAST redirect box):

GET${segment}&data.age=${ageband}

This tag can be an HTTPS GET or POST.

The unique code (wR3Ckk) is the reference to the script - in this case, it is a simple 15 second test message.

The source is set to ama - this lets our system know who is requesting the ad, what format the data is being passed to us in and in what format we will return the response. This is all set up in a pre-defined config file called a parser.

Data can be passed to trigger different elements in the script. This can be done as key=value in the query string, JSON formatted in the data field in the query string, or as XML or JSON in the POST body. In this example, the key value pairs data.segment and data.ageband are in the query string and the values are example macro codes that might trigger the DSP to populate some data into the tag on each request.
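To make the key=value mechanism concrete, here is a minimal sketch of how such a tag URL could be assembled. The base URL is hypothetical (the real endpoint is the one issued by Studio); the data.segment and data.ageband fields follow the example above:

```python
from urllib.parse import urlencode

# Hypothetical base URL and script code; the real tag comes from Studio.
BASE_URL = "https://ads.example.com/ad/wR3Ckk"

def build_tag_url(segment: str, ageband: str) -> str:
    """Build a GET tag URL with dynamic data passed as key=value pairs."""
    params = {
        "source": "ama",          # tells the server which parser config to use
        "data.segment": segment,  # values a DSP would normally fill via macros
        "data.ageband": ageband,
    }
    return f"{BASE_URL}?{urlencode(params)}"

print(build_tag_url("sports", "25-34"))
```

In a live integration the values would be DSP macros (e.g. ${segment}) that the DSP expands on each request, rather than literals.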

The response output can be VAST (IAB standard XML), JSON, or even a 303 redirect to the file itself.  This tag responds with a VAST document containing the URL of the media asset (audio file encoded as required e.g. OGG), any impression, start and complete tags, third party trackers, and any associated companion image and companion click (not all are in this particular response).
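As a rough illustration of what a consumer of that response does, the sketch below parses a minimal, made-up VAST snippet to pull out the media asset URL and the impression pings. It is not our actual response document, just the general VAST shape described above:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up VAST snippet of the kind described above.
VAST_XML = """<VAST version="3.0">
  <Ad id="wR3Ckk">
    <InLine>
      <Impression><![CDATA[https://example.com/imp]]></Impression>
      <Creatives>
        <Creative>
          <Linear>
            <MediaFiles>
              <MediaFile type="audio/ogg"><![CDATA[https://example.com/ad.ogg]]></MediaFile>
            </MediaFiles>
          </Linear>
        </Creative>
      </Creatives>
    </InLine>
  </Ad>
</VAST>"""

root = ET.fromstring(VAST_XML)
media_url = root.find(".//MediaFile").text.strip()                  # the audio asset to play
impressions = [i.text.strip() for i in root.findall(".//Impression")]  # pings to fire
print(media_url)
print(impressions)
```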

Companion images and clicks are supported and they can change dynamically in tandem with the audio - that is all set up in the Studio ad designer tool. We can also insert any number of tracking pixels into the VAST response and, again, can fire different trackers for different creatives.

This being a redirect tag, the response could be different for every request.


For DSPs to support the dynamic functionality, several features are required:

  • VAST redirect support i.e. firing our tag, and then following the directions of the VAST response that we provide, most importantly where to find the media asset.
  • Client header information needs to be passed to us with each request. At minimum this is IP address, Device ID (on mobile) and User Agent of the listener's device. Normally this is passed (or proxied) in the HTTP headers, although can be appended to the tag using DSP-specific macros.
  • No caching of responses or tags. As this is a dynamic tag, all of the components of the response could be different for each request, so caching does not work.

We know a test is working when the requests and impressions are spread across a region, indicating that real users are generating them.
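For the client header requirement above, a DSP proxying the listener's request might forward the details along these lines. X-Forwarded-For and User-Agent are the conventional headers; the X-Device-ID name is a hypothetical placeholder, and real integrations should use whatever header names are agreed:

```python
from typing import Optional

def build_proxy_headers(client_ip: str, user_agent: str,
                        device_id: Optional[str] = None) -> dict:
    """Forward the listener's identifying details, not the proxy's own."""
    headers = {
        "X-Forwarded-For": client_ip,  # the original listener's IP address
        "User-Agent": user_agent,      # the listener's device/app user agent
    }
    if device_id:                      # mobile only
        headers["X-Device-ID"] = device_id  # hypothetical header name
    return headers

print(build_proxy_headers("203.0.113.7", "ExampleRadioApp/1.0"))
```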


When we integrate with a new DSP, SSP or ad server we run a series of tests to check that dynamic audio is fully operational:

  1. Environment: you call our dynamic creative server from a test stream and we respond with an audio ad that tells you everything we know about you and the call: time, location, device type, number of impressions. This primarily checks that the user's IP and User Agent are being correctly passed in the request header, and that impression pings are being fired.
  2. Parameter passing: you pass us a set of parameters that are available at your end via macros. Again in a test stream, we mirror them back to you in the audio, e.g. Segment, Gender, Genre. This checks that macros can be created and populated in the tag.
  3. Scale: we create some filler audio that you place in remnant/unsold inventory on a live stream so that we can test that calls work at scale and that our numbers line up.

We have different tags for each of these tests and can work with you at each stage to ratify the test and debug as appropriate. It is possible to roll all of the tests into one, depending on timing and how confident we are feeling - we've done a few of these now so we know pretty quickly when it is working and where the usual pitfalls are.

What just happened? Part 3

Analytics is a huge topic. So much so that we have broken this blog post into three.

  1. Introduction
  2. Analytics overview
  3. Individual script delivery (this post)

Individual script delivery

Below the overall campaign data, the analytics page reports individual script delivery. You can choose which script from the drop-down menu. The panels show the Impressions / Complete / LTR and CTR data for that script, as the overall campaign data above did.


Next to that is a breakdown of the Top 3 operating systems (OS Types) and Device Types that have requested this script. 

The OS options include Android, iOS, Windows, Mac OS, Linux and Other (which accounts for any other operating system that we do not recognise or device / app that does not report its OS). Device types include Mobile, Tablet, Desktop, Appliance (such as home speakers or TVs) and, again, Other (for devices that we don't recognise or don't report).

Traffic breakdown: the Flow diagram

Depending on the complexity of a script, there can be thousands, if not millions, of different versions of the audio. Showing all of these potential versions in one comprehensible way is difficult, so we designed the flow diagram to try to show the many different routes through a script.

The flow diagram goes from left to right: zero seconds on the far left through to 30 seconds (or however long the script is) on the right. The width of each line represents the relative proportion of impressions flowing through each branch of the script.

Use the Zoom tool to see the whole chart, or focus in on the area you are interested in.

Rolling over a line in the chart shows the number and proportion of impressions that flowed through that line (101.5k (22.2%) impressions for the line "Get down to your local B&Q" as shown in the diagram above).

The rules that have been set in the script dictate how the impressions flow through that script. These are shown in the bar above the flow line. Click on a branch, or the rule at the top to see further detail for that rule.

Flow diagram zoom in

The overview charts that pop up on the right will depend on the type of rule.


Overview for location rule


Overview for weather rule


Overview for random rule

These diagrams will depict the data that was served across your selected time frame, allowing you to see which data has been used through the script. These charts include Ignored and Default:

Default: Using weather as an example, where there are four possible choices (Sun, Rain, Cloud, Snow), if the script only contains lines for, say, Sun and Cloud, and a default line for the other conditions, then the chart will only show Sun, Cloud and Default (even though the default line might have been served to users whose weather condition was Rain or Snow).

Ignored shows where data was deliberately ignored when choosing this route through the script. See the blog post on ignoring here.

Below the charts is a data table that can be sorted by number of impressions.


The blue star shows which line is the default.


  • Measure the default condition separately from the rules you have created by duplicating a line to serve as a default setting, and using the same audio.


What just happened? Part 2

This is part 2 of the Analytics blog series. The other posts are here:

  1. Introduction
  2. Analytics overview (this post)
  3. Individual script delivery

Analytics overview

On the main analytics page, the top bar shows campaign delivery: the overall delivery of all scripts in the campaign.

From left to right:

  1. The first number shows the number of impressions served, and the blue wheel around that number indicates how far through the total that is (the total being the number shown by Complete).
  2. Unique is the total number of impressions from unique identifiers that we have seen for this campaign (which can be made up of many scripts) divided by the total number of impressions. 34.4% means that each user has heard this campaign just under 3 times.
  3. Listen Through Rate (LTR) is the total number of End tracking pings we receive divided by the number of Start pings.
  4. Click Through Rate (CTR) is the total number of clicks divided by the total number of impressions.
  5. The impression targets delivery chart shows how the campaign is delivering over time compared to the average number of impressions per day (calculated by dividing the total number of impressions in the campaign by the total number of days between the campaign start and end dates). This chart is useful to quickly see if a campaign is over- or under-delivering. Note, the delivery chart is not shown after the campaign end date.
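The headline figures above are all simple ratios. Here is a sketch of the arithmetic with made-up numbers; the field names are illustrative, not our API:

```python
def campaign_metrics(impressions, uniques, starts, completes, clicks,
                     target, campaign_days):
    """Headline delivery figures, expressed as percentages where relevant."""
    return {
        "delivery_pct": 100 * impressions / target,     # progress toward target
        "unique_pct": 100 * uniques / impressions,      # proportion of unique listeners
        "ltr_pct": 100 * completes / starts,            # Listen Through Rate
        "ctr_pct": 100 * clicks / impressions,          # Click Through Rate
        "avg_daily_target": target / campaign_days,     # pacing line on the chart
    }

m = campaign_metrics(impressions=100_000, uniques=34_400, starts=95_000,
                     completes=80_000, clicks=500, target=150_000,
                     campaign_days=30)
# A unique_pct of 34.4 means each user heard the campaign ~2.9 times (100 / 34.4).
print(round(m["unique_pct"], 1))  # → 34.4
```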

The next chart shows how the different scripts in the campaign have delivered over the selected time.


If you have multiple live scripts in a campaign or have re-published scripts within a campaign (either under the same publication key or multiple publication keys) then you can see how each of those scripts has contributed to the overall delivery.

You can choose to show delivery by Campaign, Week, Day or Hour until the present moment.


The next section of data is all based on the script that you choose from this campaign.

By default, the script that has delivered the most impressions in this campaign is selected.

Our next blog post covers how we provide analytics for an individual script in more detail.

The analytics story continues over at part 3, here.


What just happened? Part 1

Analytics is a huge topic. So much so that we have broken this blog post into three.

  1. Introduction (this post)
  2. Analytics overview
  3. Individual script delivery

Analytics Part 1: Introduction

We provide a comprehensive set of analytics tools to show you exactly how a campaign is being delivered from the moment it starts. The general principle is that our analytics show delivery - how impressions for that campaign or script have been served. Today we don't measure anything other than delivery, as the only metrics we have are impressions (and a few clicks) - see the audio measurement side bar.

There are three places to get to the data:

1. Dashboard. The dashboard card for each live script shows how many impressions have been served relative to the total impressions target and provides a quick reference impressions chart as a shadow in the background of the card.


2. The Analytics button from the main toolbar. This takes you to the analytics overview page that shows the delivery of all of the scripts currently running that your user group has access to.


Each frame contains a chart that shows delivery of impressions over time, total impressions served relative to the total impressions target of that campaign and the proportion of unique users.

Clicking the blue chart icon will take you to the individual page for that script.

3. Script icons.


Clicking the chart icon from anywhere in the interface will take you to the analytics view for that campaign or script. If a script has not been published or no impressions have been tracked then the analytics page will be blank or unavailable.

Key settings

The analytics system uses the campaign start and end dates and the impressions totals for each script.


Note: these settings in our system are purely to power the analytics: A Million Ads does not control campaign delivery. Campaign start and end dates, number of impressions, frequency capping, front/back loading, targeting and segmenting are all done at the ad server / SSP / DSP level - we simply obey the requests that come our way.

Many of the charts here will be replicating charts that you can find in other systems. We hope that our eye for nice design and usability makes it preferable to use our tools over others you may have access to.

Audio measurement side bar

Audio is a hard medium to measure because it is typically consumed in a passive way: on the radio in the corner of the room, in the car, on your headphones plugged in to your mobile ... that is in your pocket. So, unlike other digital media, there are very few signals from users with which to measure performance (video has view-through rate, display has click-through, desktop has cookies for attribution, and so on). Some audio ads are delivered with companion images that can be clicked on, and the click can be measured, but clicks have been widely discredited: the image doesn't stay on beyond the audio, and where there is an exit or cancel button, most clicks are false positives from people aiming for the cancel button and missing! We track impressions, clicks, when the ad starts playing and when playback is complete, and we display these data points in the analytics page, but it all comes with this "health warning"!

The future of audio measurement

In time, voice activation will prove to be a useful interface for users to interact with ads, but for the moment, impressions is all we have.

This is just the start of the post on analytics. Continue the story in part 2, here.

Riding high

Overrides give manual control over which script is played

The basis of the override feature is to allow users to manually set a piece of data in a script, effectively overriding any other data that may be around. Here is what the settings screen looks like for a script we ran recently:


We have defined a custom data field called "Walkers" and told the script to change based on the number e.g. "Over 70,000 walkers are taking part ..." is the default, but with the override set as per the screen shot, the script changes to "Over 80,000 walkers are taking part".
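The mechanics amount to "manually set value beats incoming value". A minimal sketch, assuming request data and overrides both arrive as key/value maps (the "Walkers" field is the custom one from the example above):

```python
def apply_overrides(request_data: dict, overrides: dict) -> dict:
    """Overrides win over any data passed in with the request."""
    merged = dict(request_data)
    merged.update(overrides)  # manually-set values replace incoming ones
    return merged

# e.g. the "Walkers" custom field from the screenshot above
data = apply_overrides({"Walkers": "70000", "weather": "Sun"},
                       {"Walkers": "80000"})
print(data["Walkers"])  # → 80000
```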

You can use this for things like changing a product (for example: 3 products are on sale, but one has just sold out!), price (dynamic pricing based on demand) or prize fund (this week's lottery jackpot amount).

The settings page above can be made available to the client or agency themselves, giving them real control over their creative, and it is much simpler than making an API connection: once the save button is pressed, the next ads served will reflect the change.

Near, far, wherever you are

How we work with location.

Our individual location is a very powerful signal to use for personalisation. We all feel connected to where we live, where we work, where we grew up, and ads that can smartly use our location have the opportunity to really connect with us. In its most basic form, an ad that simply mentions which city or town we are closest to can feel more familiar to us. How about the ad that mentions a nearby landmark that everybody knows: your closest train station, monument, park or motorway?

To give audio producers and creatives the flexibility to use any of these techniques we have built location in to our Studio as one of the rules.


When you select this rule, you can choose a centre point and a radius within which to locate each user. Use the search box over the map to quickly find places using Google Maps, and we automatically suggest a radius based on the city boundaries.


To consider whether a user is within this radius, we have to consider the accuracy of the user's location.

Location accuracy

We get the approximate location of the user from their IP address (the unique address that each internet-connected device is given), which discloses the rough location of that device, along with an accuracy score of how confident we are about that location. This is called the accuracy radius (lower is more accurate).

So, to consider the user to be within a location, we take the distance between the location's centre point and the user's centre point and add the accuracy radius. If that is less than the location's radius, then we consider this user within that location.

The illustration below shows this. Nottingham is the location's centre point, with a location radius of 12km. Only the yellow point is matched because both its centre point and its accuracy radius are within Nottingham's area.


We check each user against each location in a script using the above calculation and, if the user is not within any of the locations, we return the default. If the user can be located in more than one radius then we pick the closest distance. If the distance is exactly the same, we pick the smallest radius.
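Putting the matching rule and the tie-breaks together, here is a sketch of the calculation, using the haversine formula for great-circle distance. The function names and the (approximate) Nottingham coordinates are illustrative, not our production code:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_location(user_lat, user_lon, accuracy_km, locations):
    """Return the best-matching location name, or None for the default.

    locations: list of (name, lat, lon, radius_km). A user matches when
    distance-to-centre + accuracy radius <= location radius; ties are
    broken by closest distance, then by smallest location radius.
    """
    matches = []
    for name, lat, lon, radius in locations:
        d = haversine_km(user_lat, user_lon, lat, lon)
        if d + accuracy_km <= radius:
            matches.append((d, radius, name))
    if not matches:
        return None          # fall back to the default line
    matches.sort()           # closest distance first, then smallest radius
    return matches[0][2]

# Nottingham example: centre point with a 12 km location radius.
locs = [("Nottingham", 52.95, -1.15, 12.0)]
print(match_location(52.97, -1.16, 3.0, locs))   # accurate and nearby → Nottingham
print(match_location(52.97, -1.16, 15.0, locs))  # accuracy radius too large → None
```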

When we look at the accuracy radius of a sample set of IP addresses for impressions served in the UK, we can plot the proportion of those IPs and the accuracy. As the chart below shows, approximately 55% of users have an accuracy radius of 20km or less.


Radii can overlap or be concentric to create nice effects like a waterfall of city to region to country.

For script writers, we recommend not trying to get too micro with location - city level is best. This can be great for a retailer who has one or two stores per city - see the map below, which shows the centre points and radii of a well known UK store chain.


This way of working with location inherently means that there are some users on some connections whose location we just don't know, or their accuracy is so low that it is useless. In these instances we revert to the default.

In the future more devices will report their location more accurately using the GPS or aGPS of their device and we will be able to achieve more creative executions, such as navigation and location history.

There are some great tools online to help understand location and convert between post codes, addresses and lat,long coordinates. We really like BatchGeo, Doogal and Batch Postcode Finder.

Greater than, less than

Rules are processed in order, from the top down.


In an option block you can add up to four rules that all have to be true for that element to be chosen. For example, in the image above, the rules are Day of Week, Weather and Impression.

We process the option block from the top down and stop processing as soon as we find an element where all rules are true. So, for the "It's another Monday morning" line to be chosen in the above example, Day of Week would need to equal Monday, Weather equal Cloud and Impression less than 2. If these rules are all true, then this line will be chosen and we will not continue down the list. If any of the rules are not true then we move on to the next element in the list ("It's a sunny Monday afternoon").
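That top-down, first-true-wins evaluation can be sketched in a few lines. The option block below is a hypothetical stand-in for the one in the image:

```python
def choose_element(elements, context):
    """Pick the first element whose rules are all true for this request.

    elements: list of (line, rules) where rules is a list of predicates
    over the request context; an element with no rules always matches,
    so the last element acts as the default.
    """
    for line, rules in elements:
        if all(rule(context) for rule in rules):
            return line
    return None

# Hypothetical option block mirroring the example above.
elements = [
    ("Its another Monday morning",
     [lambda c: c["day"] == "Monday",
      lambda c: c["weather"] == "Cloud",
      lambda c: c["impression"] < 2]),
    ("Its a sunny Monday afternoon",
     [lambda c: c["day"] == "Monday",
      lambda c: c["weather"] == "Sun"]),
    ("Default line", []),  # no rules: always true
]

ctx = {"day": "Monday", "weather": "Cloud", "impression": 1}
print(choose_element(elements, ctx))  # → Its another Monday morning
```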

For numerical and date rules, this can have a neat outcome by using less than or greater than rules.

Example 1: Age

If you have messages for listeners of different ages, then you can use the less than rule to always evaluate the first in the list. As the table below shows, for age bands less than 18, less than 30, less than 50 and default, the yellow marker shows which message would be returned for different user ages.

Age rule   Age = 17   Age = 27   Age = 51
< 18       Yes        No         No
< 30       Yes        Yes        No
< 50       Yes        Yes        No
Default    Yes        Yes        Yes

Where Age = 17, all three rules are true, but we pick the first in the list, so the <18 message is returned.

Example 2: Date

You may have time-bound messages, for example over the Christmas period, where you need different messages up to Christmas Day, then up to New Year's Eve, then after 1 Jan. The table below shows how that would be evaluated, again taking the first "Yes" in the list as the message returned.

Date rule       21 Dec   28 Dec   4 Jan
Before 25 Dec   Yes      No       No
Before 31 Dec   Yes      Yes      No
Default         Yes      Yes      Yes

Are you ignoring me?

Ignore rules to reduce the number of options.


Sometimes when writing a script you want to have a specific line for one option only but not the others. For example, if you have a line that changes by day part (morning, afternoon or evening) and you want to talk about the weather only in the morning line, the Ignore check box lets you avoid having to create lines for every possible combination.

Without ignore, you would need to have 12 lines in the script:

            Snow   Rain   Cloud   Sun
Morning     1      2      3       4
Afternoon   5      6      7       8
Evening     9      10     11      12

With ignore, you can ignore weather for some of the day parts (for example Afternoon and Evening) and so end up with 6 lines:

            Snow   Rain   Cloud   Sun
Morning     1      2      3       4
Afternoon   5 (with Weather set to Ignore)
Evening     6 (with Weather set to Ignore)

You could take this even further if you only wanted to call out a snowy morning and not mention the weather in any other morning line.

            Snow   Rain / Cloud / Sun
Morning     1      2 (with Weather set to Ignore)
Afternoon   3 (with Weather set to Ignore)
Evening     4 (with Weather set to Ignore)

And as long as the Morning - Snow line (1) comes before the Morning / Ignore line (2), a snowy morning will trigger that line. This is because we process rules from the top down and stop at the first true rule. (See the Greater than, less than post for more.)
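A sketch of how an ignored rule interacts with top-down processing, with hypothetical line texts: an ignored rule is simply treated as true, so the snow-specific line must come first in the list:

```python
def rules_match(element_rules, context, ignored):
    """Treat any rule marked as ignored as true for this element."""
    return all(name in ignored or check(context.get(name))
               for name, check in element_rules.items())

# Morning-only snow call-out: only line 1 checks the weather.
lines = [
    ("Snowy Monday morning!",
     {"daypart": lambda v: v == "Morning", "weather": lambda v: v == "Snow"},
     set()),
    ("Good morning!",
     {"daypart": lambda v: v == "Morning"},
     {"weather"}),  # Weather set to Ignore
]

ctx = {"daypart": "Morning", "weather": "Snow"}
for text, rules, ignored in lines:
    if rules_match(rules, ctx, ignored):
        print(text)  # → Snowy Monday morning!
        break
```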