I have some great news. Bugs 833382 and 833386, the last pieces needed to get initial support for WebVTT into Gecko, will be landing for sure next week. That said, I hope I don't have to eat my words; it's looking really good though. Bug 833382 has gotten to the point where Boris (:bz) has conditionally r+'ed it, and 833386 has reached the same point with Ralph (:rillian). Now all that's left for 833382 is to spin up some try builds, and if it's all green, hopefully we'll be good to go. 833386 still needs to go through review with Chris Pearce (:cpearce), but I don't think that will take long.
I’ve been pushing really hard on landing these two this last week, and I’m ecstatic that we’ve gotten to the point where we’ll land them in the next few days. I’m particularly happy about the WebVTTLoadListener, as most of that code is mine and I’ve worked really hard on it. It’ll feel good to land that. In the case of 833386, most of the code is Jordan Raffoul’s (:jbraffoul) and Marcus Nsaad’s (:msaad); the work I did was to consolidate it and get it through the last couple of rounds of review. Marcus had been on vacation for a while and we really wanted to get this landed ASAP, as it is blocking quite a few things, so I asked Marcus if I could step in and he didn’t have a problem with it (Yay Marcus!).
I finally figured out, after asking the right people (bz and Ms2ger, who woulda’ thunk?), that it’s actually impossible to test with two different prefs on one page: there is one prototype for each element type on a page, and the pref is only applied to it once, when the element is first created. So once you’ve created an element under one pref on a page, that’s it; it will behave as if it’s preffed that way no matter what you do. After that it was pretty quick to get through the rest of the code needed. One of the really good things about this process is that it allowed me to find a lot of points where our current implementation is not to spec. I’ve filed a few bugs on those this week. I’ve also closed a few bugs that have been fixed by recent changes, and spun off another bug for tests we will need to implement when the WebVTT pref finally gets removed.
Try Server Access
The other really awesome news is that I’ve finally got try server access! Earlier this week Daniel Holbert (:dholbert) suggested that I apply for it and said he would vouch for me. I probably should have applied for it sooner than this, as I could definitely have used it. The process was fairly easy, and I’m glad to say I now have Level 1 commit access. Woot! If you’re interested in applying as well, check out this page that describes what you will need to do.
In accordance with this new awesomeness I also had to learn the process of pushing to the try server. Check out this good page for more information on how to do that. You’ll also probably want to look at the Mercurial Queues (mq) extension, as it helps with managing a bunch of patches that you can move in and out of your branches easily. This works really well with my workflow of developing in Git, then applying patches to my hg repository and pushing to the try server.
Until next time.
It has been about two weeks now since my class and I set out to start work on the 0.1 release of WebVTT for Firefox. We are now nearing our deadline and are in the final stages of peer reviewing each other’s work. You can check out that action on our main GitHub repository.
During the development of this 0.1 release I learnt a lot of things:
- How to properly dual boot a Linux install
- Continued to get better at using Git
- Learnt the basics of Python
- Learnt about Makefiles and how ridiculously confusing they are
As we were developing the test suite for WebVTT, which will be the bulk of this 0.1 release, we had to address many different questions about the structure and standards we would be following:
- What would the naming convention be for our test files?
- What would the content of our test files look like?
- How would our test harness function?
- What would the Makefile need to build?
- How would we keep the integrity of line endings in our test files as we would need to be testing LF, CR, and CRLF?
We eventually answered all of these questions leaving us with a pretty robust and clean test suite:
- Our Professor made a test harness written in Python that would take all the tests that we wrote and feed them through the Node.js WebVTT parser module. This would give us a sanity check to confirm that the tests we write are good before we write our custom WebVTT parser in C++ for Firefox.
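A harness along those lines can be sketched in Python. To be clear, this is my rough reconstruction of the idea, not the actual run-tests-js.py: the parser command, directory layout, and function names are all assumptions.

```python
import os
import subprocess

def expected_outcome(path):
    """Files under a 'good' directory should parse; under 'bad' they should not."""
    parts = path.replace("\\", "/").split("/")
    if "good" in parts or "known-good" in parts:
        return True
    if "bad" in parts or "known-bad" in parts:
        return False
    raise ValueError("cannot classify test file: %s" % path)

def run_suite(spec_dir, node_parser="parse-vtt.js"):
    """Walk spec_dir for .vtt files, feed each to the Node.js parser,
    and compare the result against the directory the test came from."""
    passed = failed = 0
    for root, _dirs, files in os.walk(spec_dir):
        for name in files:
            if not name.endswith(".vtt"):
                continue
            path = os.path.join(root, name)
            # Exit status 0 means the Node module accepted the file.
            ok = subprocess.call(["node", node_parser, path]) == 0
            if ok == expected_outcome(path):
                passed += 1
            else:
                failed += 1
    return passed, failed
```

The key design point is the same sanity check described above: a test is only "passing" when the Node parser's verdict matches the directory the test file lives in.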
- We added in a .gitattributes file that specifies not to convert line endings on our test files:
./test/spec/good/*.test -text
./test/spec/bad/*.test -text
./test/spec/known-good/*.test -text
./test/spec/known-bad/*.test -text
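Since Git would otherwise normalize line endings on checkout, it is handy to be able to confirm what a checked-out test file actually contains. Here is a small Python sketch of such a check; the helper is mine and not part of our suite:

```python
def detect_line_endings(path):
    """Return the set of line terminators ('LF', 'CR', 'CRLF') present in
    the file at path, read in binary mode so nothing gets converted."""
    data = open(path, "rb").read()
    # Count CRLF pairs first, then subtract them from the bare CR/LF counts.
    crlf = data.count(b"\r\n")
    cr = data.count(b"\r") - crlf
    lf = data.count(b"\n") - crlf
    endings = set()
    if crlf:
        endings.add("CRLF")
    if cr:
        endings.add("CR")
    if lf:
        endings.add("LF")
    return endings
```

Running this over the good/ and bad/ directories makes it easy to spot a test whose CR or CRLF endings were silently rewritten.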
- We decided to document all our tests on our wiki and standardize the naming of our tests using the format of tc###-short_information_block_here.test. You can see our wiki page on the naming convention here. We also decided to create a custom .test file format that would contain two parts, a comment section at the top and a WebVTT section at the bottom. Here is one of the .test files that I wrote:
/*
 This tests to make sure that a Cue Component class can be resolved
 with the [cue component].[subclass] notation. This test should pass.
 Based on the WebVTT cue components specification as of October 3, 2012.
 http://dev.w3.org/html5/webvtt/#webvtt-cue-span-start-tag
*/
WEBVTT

00:11.000 --> 00:13.000
<u.class.subclass>Hey this is a test!
We decided on this format as it allowed us to keep the metadata right with the test. Putting it directly in the test file will make it easier to work with in the future as you won’t have to refer back to another document to find the metadata of the test file.
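As a side note, a convention like tc###-short_information_block_here.test is easy to enforce mechanically. Here is a hypothetical Python check, not part of our suite, with the allowed characters being my assumption:

```python
import re

# Matches tc, exactly three digits, a dash, then a lowercase
# underscore-separated description, ending in .test
TEST_NAME = re.compile(r"^tc\d{3}-[a-z0-9_]+\.test$")

def is_valid_test_name(name):
    """True if a file name follows the tc###-description.test convention."""
    return bool(TEST_NAME.match(name))
```

A check like this could run in the harness so badly named files fail fast instead of silently skipping review.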
- Now that we had a custom .test file, we needed to parse it before running it through the Node module, in order to rip out the WebVTT section. To address this I wrote a custom test file parser in Python and changed the Makefile to run it before running the test harness. We ran this configuration for a while, until my Professor told me that rather than the Python script looping through the .test directories and ripping the WebVTT itself, the Makefile should determine which test files need to be ripped and call the Python script for each individual test file. In accordance with this I spent a lot of time working with our Makefile, trying to figure out how to get it to run the way we wanted it to. Through this I learnt a lot about Makefiles (I will do a blog post later to talk about this in detail) and, after much struggle, with the help of one of my classmates as well as my Professor, we got it working. To implement this correctly we had to add a few lines to the Makefile:
SRC_DIR = .
TEST_DIR = $(SRC_DIR)/test
SPEC_DIR = $(TEST_DIR)/spec
OBJ_DIR = $(SRC_DIR)/objdir
OBJ_DIR_SPEC = $(OBJ_DIR)/test/spec

# Get all the .test files underneath the directory specified by $(SPEC_DIR)
TEST_SRC := $(shell find $(SPEC_DIR) -name '*.test' -print)

# Transform all .test files rooted in ./test to .vtt rooted in ./objdir/test
VTT_SRC := $(subst $(SRC_DIR)/test,$(OBJ_DIR)/test,$(subst .test,.vtt,$(TEST_SRC)))

STRIP_VTT = $(SPEC_DIR)/strip-vtt.py

objdir:
	mkdir $(OBJ_DIR)

check-js: objdir $(VTT_SRC)
	$(PYTHON) ./test/spec/run-tests-js.py $(OBJ_DIR_SPEC)

$(OBJ_DIR)/%.vtt : $(SRC_DIR)/%.test
	@$(PYTHON) $(STRIP_VTT) $< $@
Now when we run the command ‘make check-js’ (check-js denotes that we want the Node.js WebVTT module to run), the Makefile will make the object directory where the ripped .vtt files will live, call the script to rip the WebVTT out of the test files that have changed since the last build, and then run the test harness. This is much cleaner than the first solution, where the Python script just ripped every .test file each time it ran, without checking whether it actually needed to.
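The ripping step itself is simple in principle. Here is a minimal Python sketch of what a strip-vtt.py-style script could look like; this is my reconstruction of the idea, not our actual script, and it works in bytes so the CR/LF/CRLF endings the tests depend on survive untouched:

```python
import sys

def strip_vtt(data):
    """Return the bytes after the closing */ of the leading comment block.

    If no comment header is found, assume the file is already plain WebVTT.
    """
    end = data.find(b"*/")
    if end == -1:
        return data
    # Drop the header and any whitespace between it and the WEBVTT line.
    return data[end + 2:].lstrip(b"\r\n ")

# Invocation matches the Makefile rule above: strip-vtt.py input.test output.vtt
if __name__ == "__main__" and len(sys.argv) == 3:
    with open(sys.argv[1], "rb") as f:
        body = strip_vtt(f.read())
    with open(sys.argv[2], "wb") as f:
        f.write(body)
```

Because the Makefile's pattern rule only fires for .vtt files that are older than their .test source, a script this dumb is fine: make, not Python, decides what needs ripping.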
Hopefully, we will be able to get through our peer reviews soon and get started on our next release, where I hope we will begin working on the actual parser and hook for Firefox.
So, in my open source class this semester we have been working heavily with Git and GitHub in order to organize and version control our work on the WebVTT implementation in Firefox. Thankfully, I learnt the basics of Git before I started this class. If I had had to deal with Git on top of all the other stuff I have been learning, like Makefiles (argh), this class would have been extra hard.
Up until this point I have been proficient with many of Git’s basic operations: the commands you need to know in order to use Git at all. And I do pretty basic things with them. Until this class I had never been able, or wanted, to drive down into Git’s more advanced capabilities, but now I’m getting an opportunity to delve a little deeper.
Some things I learnt that I didn’t even know were possible:
- tracking a remote branch which allows you to automatically track updates in the remote branch as well as giving you the ability to do a push if you have r+w permissions
- using git show to see the most recent changes to anything: branches, commits, etc.
The biggest chunk of what I have learnt in this class regarding Git is how to use it properly in a workflow:
- making small commits instead of big ones, so that each change can be rolled back and tracked easily
- checking out branches in order to try things out, such as testing whether your current commit will merge cleanly with another branch
One of my friends in class, Jordan Raffoul, who also works with Git at his job, sent me a link to a really insightful article on what a successful Git branching model should look like. The article blew me away. It describes a branching model that accounts for all the workflows in a modern software development life cycle, from development to production, with support for hotfixes, bug patches, and feature development. It’s pretty impressive. I know what Git model I’m recommending next time I start a new project!
The main thing all this made me realize is that I haven’t even scratched the surface with Git. Even while searching for links to the Git web pages to post in this blog I saw tons of commands I had never heard of. It makes me wonder how powerful Git truly is.