Ok, on this page I’m going to talk about the creation and implementation of a media player for the university. This page is very much a work in progress until I say otherwise! :)
The brief was to investigate Flash Media Interactive and how it could be used to stream media content stored outside of Blackboard, which could then also be made available elsewhere.
The main challenges at the beginning were:
- Understanding the Flash Media Interactive application
- Protecting video content
- Learning Flash and ActionScript
- Working out how the individual components would work together
Before I get ahead of myself I should mention there are two versions of the server available: Flash Media Server and Flash Media Interactive. Flash Media Server provides video streaming, while the Interactive version provides additional features for live streaming, recording and DVR functionality, plus more security through its plug-in architecture.
To begin with it was a case of installing the Flash Media Server developer edition onto a test environment to give us somewhere to test and play with the service. Adobe provides a free development version of the server that allows 10 simultaneous connections.
Working through sample tutorials started to provide a basis for understanding how the folder structures worked on the server, and how to create connections to the server to stream video. Adobe themselves have many documents within their developer area discussing the server: how to protect video, stream video, build playlists, and use dynamic switching.
After investigating different options development of an initial test system began. The top level view is shown below:
To explain the diagram: the Flash player interface establishes a connection to the server. When the player attempts a connection, a number of variables are passed in a client object, including the IP address, URL, URI, and referrer. These values can then be used to control access to individual videos.
The media server then returns either an accept or a reject response to the connection request. If successful, the connection created can be used to initiate a stream for the video.
That is a top-level description; now to dig a bit deeper into how this works. First I’ll discuss the technical back end, and then the player interface itself.
Flash Media Interactive – Security
One of the features of Flash Media Interactive, and streaming video in general, is the ability to protect video content. This is achieved with the use of streamed video, plug-ins, and SWF verification.
Progressive downloads of video over HTTP rely on caching: normally a video will not start playing until a buffer of downloaded content is available for playback. Technologies and applications are freely available to capture this data and save the video, which for a content provider is not acceptable. Streaming avoids caching data on the client, and also allows instant jumping to any point in the video. Streams can additionally be encrypted using the RTMPE protocol.
These methods, while increasing the security of video content between the server and client, are not 100% secure.
Another way to prevent video streams being requested and captured is SWF verification. A copy of the SWF file is held on the server, and when a connection request is received the calling SWF is compared with the copy held on the server. If they are not the same the connection is rejected. This isn’t 100% reliable either, as it is possible to replicate the data passed to the media server.
To validate connection requests to the server for a video, there were a few places this could be implemented:
- Access plug-in
- Authorization plug-in
- Server Side Actionscript
The access and authorization plug-ins are DLL files written in C++ that sit on the server. The Access plug-in intercepts all connections to the server before they reach Flash Media Interactive. The Authorization plug-in works between a connection being established and it being accepted. Whereas the Access plug-in is a single DLL file, you can have multiple Authorization plug-ins to handle different server events.
The third option is Server-Side ActionScript. This lives in a file, Main.asc, within the Applications directory. This file uses ActionScript and can access the client object passed by a connection request to accept or reject the connection.
application.acceptConnection(p_client);
The Main.asc works on a per-application level and so can be customised for individual cases more easily than the Access plug-in. Within the Applications directory a folder is created for each employee who logs in to the CMS interface, and each user directory contains a Main.asc. The directories themselves are named after the employee id, as this is the only identifier that does not change.
The Main.asc has been used to capture the client object and then use this to determine whether the connection should be accepted or rejected. It passes the details to a PHP script, which in turn checks a database to see if the video is available and what access restrictions have been set. If the PHP process finds a reason the connection should be rejected, the Main.asc returns a rejected response to the SWF file.
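As a rough sketch, the accept/reject logic in Main.asc looks something like the following. The second argument comes from the extra parameter passed to NetConnection.connect; ip, referrer and pageUrl are real properties of the server-side Client object, but checkWithPhp is a hypothetical placeholder for however the PHP/database check is actually made, not the real implementation.

```actionscript
// Main.asc – Server-Side ActionScript (note: ActionScript 2, not 3)
application.onConnect = function(p_client, videoPk)
{
    // details the PHP script uses to check the database
    var details = new Object();
    details.ip = p_client.ip;
    details.referrer = p_client.referrer;
    details.pageUrl = p_client.pageUrl;
    details.videoPk = videoPk;

    // checkWithPhp is a hypothetical helper: true if the video is
    // available and no access restriction matches this client
    if (checkWithPhp(details))
    {
        application.acceptConnection(p_client);
    }
    else
    {
        application.rejectConnection(p_client);
    }
};
```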
This leads us on to access restrictions. Within the CMS interface users are able to set whether the video is available, and to add lists of restrictions based on:
- IP Address: 127.0.0.1
- Referrer: http://127.0.0.1/myflash.swf (this is unlikely to be needed if using the University of Lincoln player)
- Page URL: http://127.0.0.1/mypage.html
These were added to allow content owners to restrict video playback to certain pages, for example Blackboard sites or the portal.
Note for the future: the Main.asc uses ActionScript 2, not ActionScript 3. It took a while to figure out why things weren’t working!
CMS and Database
I’ll only briefly cover this section. The database used to hold the data on videos and access restrictions is MySQL.
The CMS is a custom interface developed in PHP. Employees log in using their usual username and password which is then validated against Active Directory. As mentioned earlier, on first log in the service creates an application folder for the user based on their employee id. This directory will hold all future videos uploaded.
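A minimal sketch of that Active Directory check, using PHP’s LDAP extension — the server address and the @example.ac.uk suffix are illustrative assumptions, not the real values:

```php
<?php
// hypothetical sketch of validating a login against Active Directory
$ldap = ldap_connect("ldaps://ad.example.ac.uk");
ldap_set_option($ldap, LDAP_OPT_PROTOCOL_VERSION, 3);

// binding with the user's own credentials succeeds only if they are valid
if (@ldap_bind($ldap, $username . "@example.ac.uk", $password)) {
    // valid login – look up the employee id and, on first log in,
    // create the user's application folder for their videos
} else {
    // invalid credentials – reject the login
}
```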
The CMS interface allows users to upload videos, resize, make the videos available/unavailable, set access restrictions, preview videos, and provides the code to copy and paste into a page to display the video.
Encoding videos
Flash Media Interactive streams either MP4 or FLV video formats. Users can upload files in a number of different formats, which are then re-encoded to the size requested and into MP4. The application used to do this is the Handbrake command line interface.
When a user uploads a video, an encoding process is initiated and then forgotten about as far as the PHP file is concerned; otherwise, for large files the page would eventually time out. The downside is that there is no way of providing a progress bar to users, or of knowing when the file has completed encoding.
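The fire-and-forget launch is essentially this pattern — the paths and Handbrake flags here are illustrative, not the production values:

```php
<?php
// redirecting output and backgrounding with & means exec() returns
// immediately rather than waiting for the encode to finish – which is
// also why we never learn when (or whether) encoding completed
$cmd = "HandBrakeCLI -i " . escapeshellarg($uploadedFile)
     . " -o " . escapeshellarg($outputMp4)
     . " -b 250 > /dev/null 2>&1 &";
exec($cmd);
```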
The bit rate was set to 250kbps, which is in line with other online streaming sites, but it would be possible to let users choose the encoding quality for higher-bandwidth applications.
Update 20/08/2010
After the media server had been released, some users reported problems with some files not working. On investigating the issue it turned out that some WMV files, created by cameras, were not being converted by Handbrake.
The process was tried on a separate machine with the same result: Handbrake would just return an error message and the process would terminate. Other file converters, such as MeGUI and Any Video Converter, had the same problem. After further investigation on the internet it seemed other users were having similar problems, related to the latest WMV codec, WMV9.
A converter called Super was recommended by Mark Aldridge, and this was able to successfully convert the files to MP4 format. As this application is based on FFMPEG, it was decided to try the latest version of FFMPEG from the command line, which was also successful. An added benefit was that FFMPEG also allows the creation of JPEG images from a video file, which provided a far better solution for showing a preview/thumbnail image (discussed later).
Player Interface
Ok, now the fun part, the player interface!
The development process was rather daunting as everything was constantly shifting; at each step new ideas and ways to pull things together were being found. Having never used Flash Professional before, it was very much a learning experience in how to use the stage, objects, symbols and ActionScript 3. Many problems were encountered, lots of head scratching was done, and workarounds were implemented.
In this section I will talk about
- Stage
- Layout
- Connecting
- Variables
- Thumbnails
- Video Scrub
- Sound
- Accessibility
Stage
The Stage is the canvas on which you place objects and symbols in Flash. The size of the stage is normally set when you create the flash animation in the application. The problem was that we were allowing people to resize videos so we had to be able to accommodate this.
This meant that not only did the stage have to change size, but also that the player interface around it would need to alter.
This is achieved by capturing the metaData of the video object once a connection has been established with the server.
That sounds easy, but the method of connecting and handling the process is quite involved. I’ll try to cover it here, and it is also covered in the Connecting section.
var customClient:Object = new Object();
customClient.onMetaData = metaDataHandler;
The above code creates an object called customClient, and then assigns an event handler which calls the function metaDataHandler when meta data is received.
The customClient object needs to be associated with a NetStream object.
Once a NetConnection has been established, an event listener on the NetConnection calls a function which checks for the NetStatusEvent code NetConnection.Connect.Success. A NetStream object is then created and the customClient assigned to it:
ns = new NetStream(nc);
ns.client = customClient;
When meta data is received we can then access information on the video
function metaDataHandler(infoObject:Object):void
{
    myDuration = infoObject.duration;
    myVideoHeight = Number(infoObject.height);
    myVideoWidth = Number(infoObject.width);
}
Once we have the video size we can lay out the page. Before I get onto layout, one issue encountered was that the stage wouldn’t centre in the object space specified in the HTML. After much searching through the internet and trying many (many) different solutions, it finally turned out the stage alignment needed setting to top left.
stage.align="TL";
Layout
Working out how to use the stage and add items was one of the more time-consuming parts of the project. It was decided not to use existing open source players, due to needing to handle various video sizes and the security we would be implementing.
The stage and layout would all be created in Actionscript, not by dragging and dropping from the library.
To use images from the library it is possible to import the item as an image object and then contain it within a symbol, but it was far easier to rely on the symbol object that is created automatically when an image is imported into the library. It was then a case of giving the symbol an instance (class) name and, under the properties for the symbol, ticking the Export for ActionScript checkbox so it was available as a class within the code.
Instances of the class can then be created in the code
var myvolSliderPink:volSliderPink = new volSliderPink();
When adding objects to the stage you can either add them directly using the addChild or addChildAt methods, or you can add child objects to other images or objects. An example of adding child objects is the volume control, the button and pink/white lines are all child objects of the background object.
myvolSliderBG.width=22;
myvolSliderBG.addChild(myvolSliderGrey);
myvolSliderBG.addChild(myvolSliderPink);
myvolSliderBG.addChild(myvolSliderWhite);
As the video object itself can be any size, the layout of the buttons and controls has to alter dynamically depending on the video. A function in ActionScript lays out the page, specifying X and Y coordinates on the stage; the positions of the items are based on their spacing from the edges and top of the navigation area. The image below shows some of the layout distances.
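The layout function itself can be sketched as follows — the names and spacing constants are illustrative placeholders, not the real values:

```actionscript
function layoutControls():void
{
    // navigation bar sits directly below the video, full width
    myNavBG.y = myVideoHeight;
    myNavBG.width = myVideoWidth;

    // controls are positioned relative to the navigation area's edges
    myPlayButton.x = EDGE_SPACING;
    myPlayButton.y = myNavBG.y + TOP_SPACING;
    myTimer.x = myPlayButton.x + myPlayButton.width + EDGE_SPACING;
    myTimer.y = myNavBG.y + TOP_SPACING;
}
```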
Connecting
The process of connecting and playing a video stream involves a number of steps.
First you create a NetConnection object.
var nc:NetConnection = new NetConnection();
Then add an event listener to call a function each time the connection status changes.
nc.addEventListener(NetStatusEvent.NET_STATUS, myStatus);
To connect to the server we use the NetConnection.connect method.
var connectURL:String = "rtmpe://myplayer.lincoln.ac.uk/" + myUser;
nc.connect(connectURL,myVideoPk);
The connectURL is the directory of the employee application directory on the server. The myUser and myVideoPk variables are passed to the SWF file by the html in the page containing the SWF player. These are discussed later.
The event listener on the NetConnection calls the function specified earlier. The function captures the event as an event object, and the event object carries a code which an if statement checks for a successful connection.
function myStatus(event:NetStatusEvent):void
{
    var info:Object = event.info;
    if (info.code == "NetConnection.Connect.Success")
    {
        // connection accepted – the NetStream is created here
    }
}
The Success or Rejected responses are returned by the Main.asc file in each user directory. If the video is not available, or the current user/video does not meet the access restrictions set, the user is shown a message advising them of this.
For a successful connection we can then create a NetStream object based on the NetConnection.
var ns:NetStream;
ns = new NetStream(nc);
ns.client = customClient;
The customClient is the object discussed under the Stage section, which has an event handler set for when it receives metadata. Assigning this to the NetStream means that when metadata is received it will initiate a call to the function metaDataHandler.
customClient.onMetaData = metaDataHandler;
The final step is to initiate playback of a video using the NetStream.play method.
ns.play("mp4:"+myVideoFile, myThumbnail);
The myVideoFile and myThumbnail variables are both passed to the SWF by the containing HTML document. The thumbnail solution was quite complex and encountered many problems which is discussed in a later section of this page.
Variables
From the start it was clear we would have to pass some variables to the SWF externally to identify which video was to be used, and during development additional values were required. The final list includes:
- userid – the employee id of the video owner
- videoFile – the name of the video file
- videopk – unique identifier of the video
- thumbnail – Integer value to determine the time of the video to use as a thumbnail (in seconds)
When embedding the video into an HTML page the code contains two main parts: an object with multiple parameters for Internet Explorer, and within the object an embed tag for use with browsers such as Firefox.
An example of the code to insert the flash player into a page is shown below.
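A minimal sketch of that embed code, with placeholder paths and values (the real markup includes more parameters):

```html
<object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="480" height="310">
  <param name="movie" value="player.swf" />
  <param name="FlashVars"
         value="userid=12345&amp;videoFile=myvideo&amp;videopk=67&amp;thumbnail=10" />
  <embed src="player.swf" width="480" height="310"
         flashvars="userid=12345&amp;videoFile=myvideo&amp;videopk=67&amp;thumbnail=10"
         type="application/x-shockwave-flash" />
</object>
```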
These values passed to the SWF are accessed using the flash.display.LoaderInfo class which needs importing at the start of the actionscript.
import flash.display.LoaderInfo;
You then access each value using the LoaderInfo class and the name of each value.
LoaderInfo(this.root.loaderInfo).parameters.userid;
Checks were put in place to make sure values had been entered, and if any are missing a message is shown to the user.
Thumbnails
This addition came quite late in the development process, as a request from one of the initial users testing the service. They wanted to be able to show a preview image of the video rather than just a black box.
One possible solution was to allow an image to be uploaded and used as the thumbnail. This would require the user to be able to take a screenshot of the video, crop the image and resize it.
Some applications for encoding can create thumbnails during the encoding process, but unfortunately Handbrake did not support this.
The final solution was to pause the video itself at a time point decided by the user. This option was chosen for its ease of use, but it became the cause of significant issues in getting a working version.
The NetStream.play method can accept a number of parameters, which should have made this a very simple process (see the NetStream documentation):
- name:Object
- start:Number
- len:Number
- reset:Object
The start and len values are the ones that would be used: start to determine the time in seconds of the video to use as a still frame, and len set to 0 to play a single frame, as explained in the class documentation: “If 0, plays a single frame that is start seconds from the beginning of a recorded stream.”
The NetStream play call would look like this…
ns.play("mp4:"+myVideoFile, myThumbnail, 0, true);
Attempting this in practice never resulted in a thumbnail being shown. Setting the ‘len’ value to 0.1 instead of 0 made the video show, but not every time; the ratio of thumbnails shown to not shown was about equal using this method.
Many, many attempts, variations, fudges and workarounds were tried until a final version was accepted, even though it is still not 100% reliable.
Once a successful NetConnection response has been received, a function ShowThumbnail is called. This disables audio being received on the NetStream object and sets the volume to 0.
ns.receiveAudio(false);
var myTransform:SoundTransform = new SoundTransform(0);
ns.soundTransform = myTransform;
This is because otherwise a sound blip could be heard when starting playback of the full video.
Once the sound has been disabled the thumbnail video is played.
ns.play("mp4:"+myVideoFile, myThumbnail);
No time limit is set on the play duration of this thumbnail via the ‘len’ parameter. This is to ensure video is actually being rendered before the stream stops; otherwise the result would still be a black screen.
To stop the video and show the frame, the metaDataHandler function sets a timer to call the function videoStatus on a regular basis. In the videoStatus function a boolean check is made to see if the thumbnail has been shown; if not, it pauses the NetStream and sets the video to visible.
This has proved the most reliable way of presenting a thumbnail image but, as mentioned, it is not 100% reliable. Sometimes the frame isn’t shown; other times, when the user clicks Play to start the full video, the playback position appears to have moved on.
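A sketch of that polling check — the thumbnailShown flag and object names are illustrative placeholders:

```actionscript
function videoStatus():void
{
    // once the stream has actually rendered something, freeze it
    if (!thumbnailShown && ns.time > 0)
    {
        ns.pause();
        myVideo.visible = true;
        thumbnailShown = true;
    }
}
```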
To start the playback of the full video after the thumbnail, when the user clicks the Play button in the interface:
ns.play("mp4:"+myVideoFile,0,-1,true);
ns.seek(0);
ns.resume();
videoInterval = setInterval(videoStatus, 100);
var myTransform2:SoundTransform = new SoundTransform();
myTransform2 = new SoundTransform(1);
ns.soundTransform = myTransform2;
ns.receiveAudio(true);
We initiate a new NetStream.play call starting from the beginning, call NetStream.seek to try and ensure it really is at the beginning, set the polling function again, and reinitialise the sound, both by putting the volume back up to 1 and by receiving audio over the NetStream object again.
This isn’t ideal as it doesn’t always work, but the documented method for showing a single frame didn’t work.
Update 20/08/2010
Due to the change from Handbrake CLI to the latest version of FFMPEG, I was able to implement a new method of thumbnail creation. FFMPEG allows the generation of individual JPEG images from a video, which is far more reliable than the workaround of using the NetStream to show a paused stream.
An example command line for FFMPEG is:
ffmpeg -itsoffset -116 -i ..\wmv9test.wmv -vcodec mjpeg -vframes 1 -an -f rawvideo -y -s 480x270 ..\wmv9testthumb.jpg
A new function was created in the video class file to create the thumbnail images by running this command line with variables from the video. The function is called by the encode process to generate the thumbnail when the video is created, and can also be triggered by users from the Edit Video page after entering a new time in seconds from which to create the thumbnail.
The Flash document had to be updated to load the image using the Loader class, with two event listeners to initiate the next steps once the image had either loaded or an IO error had occurred because no thumbnail existed.
If no image file was found then it just defaults to showing the black background until the user starts playing the video.
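The loading step looks roughly like this — the handler names and thumbnail URL variable are assumptions:

```actionscript
var thumbLoader:Loader = new Loader();
// fires when the jpeg has loaded successfully
thumbLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, thumbLoaded);
// fires when no thumbnail image exists for this video
thumbLoader.contentLoaderInfo.addEventListener(IOErrorEvent.IO_ERROR, thumbMissing);
thumbLoader.load(new URLRequest(myThumbnailURL));
```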
Video Scrub
The ability to move to any point in a video is a feature of the streaming server, and to allow the user to do this they have the scrub bar. The user can either click and drag the button to a point in the video, with the video updating while they move, or they can click anywhere on the entire length of the timeline to jump straight to that point.
A transparent symbol the same size as the timeline is placed above it on the stage, with an event listener set to call a function when the user clicks along it.
myGreySeekArea.addEventListener(MouseEvent.CLICK, mySeek);
The function takes the position of the click, works out how far into the timeline the user wishes to go, and then calls the NetStream.seek method to move to that position in the video.
function mySeek(event:MouseEvent):void
{
//called when user clicks on a position within the grey scrub line
var myNewPos:int = Math.floor((myDuration/myGreySeekArea.width)*(stage.mouseX-myGreySeekArea.x));
ns.seek(myNewPos);
}
The scrub ability to drag the button along the timeline is more complex. It consists of three items:
var myGreySlider:greySlider = new greySlider();
var myPinkSlider:pinkSlider = new pinkSlider();
var myWhiteSlider:whiteSlider = new whiteSlider();
The greySlider is the unplayed part of the timeline, the pinkSlider is the played section, and the whiteSlider is the button that moves along the timeline as the video plays and can be dragged by the user.
The whiteSlider object has two event listeners set to it to call functions when a user clicks and releases the mouse button.
myWhiteSlider.addEventListener(MouseEvent.MOUSE_DOWN, myScrub);
myWhiteSlider.addEventListener(MouseEvent.MOUSE_UP, myScrubRelease);
The myScrub function is shown below:
function myScrub(event:MouseEvent):void
{
//stop videoStatus function polling
clearInterval(videoInterval);
//set boundaries for the scrub icon
var myRectangle:Rectangle = new Rectangle(myGreySeekArea.x, myGreySeekArea.y+1, (myGreySeekArea.width-myWhiteSlider.width), 0);
//set a mouseevent listener to catch the moving of the icon, call scrubit function
myWhiteSlider.addEventListener(MouseEvent.MOUSE_MOVE,scrubit);
myGreySeekArea.addEventListener(MouseEvent.MOUSE_OUT,myScrubRelease);
//catch all for mouse release anywhere on stage
stage.addEventListener(MouseEvent.MOUSE_UP,myScrubRelease);
//start drag of the sprite object
myWhiteSlider.startDrag(false,myRectangle);
}
A lot happens here. The function stops the timed function call videoInterval, and sets three new event listeners. The event listener assigned to the stage object was required to make sure that when the user released the mouse button the scrub stopped. Without it there was a chance the cursor had moved outside of the rectangle area, so the mouse up would not be recognised and the scrub would never stop.
The event listener set to the whiteSlider is to call the scrubit function while the button is moved by the user.
The startDrag method is the one that starts the movement of the object, and declares the area of possible movement using the Rectangle object.
The scrubit function called while the user moves the whiteSlider is shown below:
function scrubit(event:MouseEvent):void
{
//calculation is based on the x pos of the slider on the seek area
var myNewPos:int = Math.floor((myDuration/myGreySeekArea.width)*(myWhiteSlider.x - myGreySeekArea.x));
ns.seek(myNewPos);
updateTimer();
//update pink bar to indicate watched section
amountLoaded = ns.time / myDuration;
amountLoaded = amountLoaded * myGreySlider.width;
myPinkSlider.width=amountLoaded+10;
}
This calculates the position of the whiteSlider and calls the NetStream.seek method to move the video to that point, updates the timer, and adjusts the pink played indicator on the timeline.
The event listeners catching the mouse up and mouse out events call the following function:
function myScrubRelease(event:MouseEvent):void
{
myWhiteSlider.stopDrag();
//remove the MOVE event listener
myWhiteSlider.removeEventListener(MouseEvent.MOUSE_MOVE,scrubit);
myGreySeekArea.removeEventListener(MouseEvent.MOUSE_OUT,myScrubRelease);
stage.removeEventListener(MouseEvent.MOUSE_UP,myScrubRelease);
//start the videoStatus function polling
videoInterval = setInterval(videoStatus, 100);
}
The function stops the drag, and removes event listeners that had been created in the myScrub function. It also recommences the timed function call videoInterval.
Sound
There are two main sounds controls in the interface, the mute button and the volume control.
The mute button is two symbols, one for on and one for off status. Both have event listeners for mouse clicks and the function called is:
function myMuteButtons(event:MouseEvent):void
{
var myTransform:SoundTransform = new SoundTransform();
if(muteStatus)
{
muteStatus=false;
myVolButton_white.visible=true;
myVolButton_white_mute.visible=false;
myTransform = new SoundTransform(globalVolume,0);
ns.soundTransform = myTransform;
}
else
{
muteStatus=true;
myVolButton_white.visible=false;
myVolButton_white_mute.visible=true;
myTransform = new SoundTransform(0,0);
ns.soundTransform = myTransform;
}
}
Volume isn’t changed on the NetStream object directly; you have to create a SoundTransform object, set the level, and then assign it to the NetStream object. Volume settings go from 0 to 1, in increments of 0.1.
The sound volume works in a similar way to the timeline scrubber, allowing the user to move the white slider up and down to alter the volume level.
An event listener is on the white slider button to call the mySoundScrub function.
function mySoundScrub(event:MouseEvent):void
{
//set boundaries for the scrub icon
var mySoundRectangle:Rectangle = new Rectangle((myvolSliderGrey.x-5), myvolSliderGrey.y, 0, myvolSliderGrey.height-myvolSliderWhite.height);
//set a mouseevent listener to catch the moving of the icon, call soundscrubit function
myvolSliderWhite.addEventListener(MouseEvent.MOUSE_UP,mySoundScrubRelease);
myvolSliderWhite.addEventListener(MouseEvent.MOUSE_OUT,mySoundScrubRelease);
myvolSliderWhite.addEventListener(MouseEvent.MOUSE_MOVE,mySoundScrubIt);
//start drag of the sprite object
myvolSliderWhite.startDrag(false,mySoundRectangle);
}
While the user moves the volume control up and down it calls the mySoundScrubIt function:
function mySoundScrubIt(event:MouseEvent):void
{
var mySlideHeight:Number = Math.floor(myvolSliderGrey.height-myvolSliderWhite.height);
var myCurrentY:Number = Math.floor(myvolSliderWhite.y-myvolSliderGrey.y);
globalVolume=(Math.floor((10-(10/mySlideHeight)*myCurrentY))/10);
var myTransform:SoundTransform = new SoundTransform(globalVolume,0);
//check if muted
if(!muteStatus)
{
ns.soundTransform = myTransform;
}
//update pink indicator
myvolSliderPink.y=myvolSliderWhite.y;
myvolSliderPink.height=(myvolSliderGrey.height+myvolSliderGrey.y)-myvolSliderWhite.y;
}
The function works out the current position of the white slider button on the drag line, resolves this to a value between 0 and 1 and then, if not muted, assigns a new SoundTransform object to the NetStream.
When the user releases the mouse button the mySoundScrubRelease function is called:
function mySoundScrubRelease(event:MouseEvent):void
{
myvolSliderWhite.stopDrag();
myvolSliderWhite.removeEventListener(MouseEvent.MOUSE_UP,mySoundScrubRelease);
myvolSliderWhite.removeEventListener(MouseEvent.MOUSE_OUT,mySoundScrubRelease);
}
This removes the event listeners and stops the drag of the white slider.
Another feature added is the fading of the volume slider. This makes the screen appear less cluttered: the slider only appears when the mouse cursor hovers over the area of the slider or the mute button.
The background of the volume slider has two event listeners set on it:
myvolSliderBG.addEventListener(MouseEvent.MOUSE_OVER, showSoundScrubMute);
myvolSliderBG.addEventListener(MouseEvent.MOUSE_OUT, hideSoundScrub);
To make sure the slider appears when the cursor is over the mute buttons, the same event listeners were also added to the mute symbols:
myVolButton_white.addEventListener(MouseEvent.MOUSE_OUT, hideSoundScrub);
myVolButton_white_mute.addEventListener(MouseEvent.MOUSE_OUT, hideSoundScrub);
myVolButton_white.addEventListener(MouseEvent.MOUSE_OVER, showSoundScrubMute);
myVolButton_white_mute.addEventListener(MouseEvent.MOUSE_OVER, showSoundScrubMute);
Mouse Out events were also required because without them, when the cursor moved from the slider background to the mute button, no Mouse Out event was caught on the slider background, leaving the volume control visible.
The functions to fade the volume control in and out are below:
A Tween is used to change the alpha value of the parent object of the slider control between 0 and 1. As the pink, grey and white slider objects are all children of the parent BG symbol, the alpha change applies to all of them.
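Using the fl.transitions Tween class, the pair of fade functions can be sketched like this — the function names match the event listeners above, but the duration and easing are assumptions:

```actionscript
import fl.transitions.Tween;
import fl.transitions.easing.Strong;

function showSoundScrubMute(event:MouseEvent):void
{
    // fade the whole slider group in by tweening the parent's alpha to 1
    new Tween(myvolSliderBG, "alpha", Strong.easeOut, myvolSliderBG.alpha, 1, 0.5, true);
}

function hideSoundScrub(event:MouseEvent):void
{
    // fade the slider group back out
    new Tween(myvolSliderBG, "alpha", Strong.easeOut, myvolSliderBG.alpha, 0, 0.5, true);
}
```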
Accessibility
The main problem with Flash content is making it available to users of screen readers. Sometimes Flash content may be missed altogether, or readers can become stuck inside it. To help, a number of things have been implemented.
Tab index values have been set on the main controls: play, pause, rewind and mute.
myPlayButton.tabIndex=3;
Other settings for symbols used can be set using the AccessibilityProperties class. You create an instance of the class, set values, and then assign this to a symbol/object instance. For example on the playbutton:
var playAccessProps:AccessibilityProperties = new AccessibilityProperties();
playAccessProps.name = "Play";
myPlayButton.accessibilityProperties = playAccessProps;
For objects you want to hide from readers you can set the AccessibilityProperties.silent value as true.
var hideAccessProps:AccessibilityProperties = new AccessibilityProperties();
hideAccessProps.silent=true;
myvolSliderBG.accessibilityProperties = hideAccessProps;
Other tips
Fonts
To format or add a style to a text object you can’t apply it directly to the TextField item; you have to create a TextFormat object and then set the TextField’s defaultTextFormat to it. Note that defaultTextFormat only affects text assigned afterwards, so set the format before setting the text.
var myTimer:TextField = new TextField();
myTimer.multiline = false;
myTimer.selectable = false;
myTimer.border = false;
myTimer.height = 15;
myTimer.width = 120;
//timer format font – assigned before the text so it applies
var format:TextFormat = new TextFormat();
format.font = "Verdana";
format.color = 0x999999;
format.size = 10;
myTimer.defaultTextFormat = format;
myTimer.text = "0:00:00 / 0:00:00";