Tagged: software

WEBVTT: All the Bugs.

So I’m happy to finally say that Nightly now has initial support for WebVTT! Woot!! To check it out, follow these steps:

  1. Download Nightly.
  2. Turn the WebVTT pref on by typing ‘about:config’ into your URL bar, searching for webvtt, and double-clicking the result to flip it to true.
  3. Browse to a webpage with WebVTT like this page.
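If you don’t have a test page handy, a minimal WebVTT file looks something like this (the cue text and timings here are made up):

```
WEBVTT

00:00:01.000 --> 00:00:04.000
Hi, I'm a subtitle!

00:00:05.000 --> 00:00:09.000
Cues are just timing lines followed by their text.
```

Serve a file like that from a `<track>` element on a page with a video and you have something to test against.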

Do all these things and you should see subtitles! It’s been about six months of on-and-off work, in school and afterwards, to get this initial support working in Firefox, so it’s pretty amazing to see it in action.

What’s Left to Do — Everything

The next thing on our plate is to fix up our current implementation so that we meet the spec requirements. We have a lot left to do to bridge that gap… To that end, I spent last Thursday going through the spec, outlining what we may need to do to bring our implementation up to date, and logging bugs about it. I logged a ton of them and there are still more hiding in the spec, so there really is a lot of work left to do. Hopefully, this will give us more clarity around what needs to happen in implementing the spec. If it turns out we don’t need or want to implement something internally that the spec recommends, these bugs will at least serve as a place to discuss and document those decisions.

Just last Friday we ran into one of these cases. The spec recommends using an ‘active flag’ to keep track of which cues are currently active, or showing, on the screen. The problem is that this would require us to iterate through the entire cue list and check which cues have the active flag set every time we want to update the rendering of the captions. That isn’t very efficient, and since it’s an internal detail of the spec, i.e. it would never be exposed to the user, we have some leeway to implement it in ways that make sense for Gecko. What we decided is that we will keep the main cue list of a text track sorted, which will make it very easy to keep track of the active cues based on the current playback time. I’ve done some work for that here.
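The idea can be sketched outside of Gecko with a few lines of standalone C++; these names are made up for illustration, not Gecko’s actual types:

```cpp
#include <cassert>
#include <vector>

// A stand-in for a text track cue: just a start and end time.
struct Cue {
  double start;
  double end;
};

// Given a cue list kept sorted by start time, collect the cues active
// at time t. Sorting lets us stop at the first cue that starts after t,
// instead of scanning a per-cue 'active flag' across the whole list.
std::vector<Cue> ActiveCues(const std::vector<Cue>& cues, double t) {
  std::vector<Cue> active;
  for (const Cue& cue : cues) {
    if (cue.start > t) {
      break;  // sorted: no later cue can have started yet
    }
    if (t < cue.end) {
      active.push_back(cue);
    }
  }
  return active;
}
```

The win is that the loop bails out as soon as it passes the playback time, rather than visiting every cue on every rendering update.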

Testing

Unfortunately, the main bug for tests, along with Caitlin’s new tests for making sure that we don’t display cues that have malformed timestamps, still hasn’t landed. We’ve had problem after problem with these, the newest being that they don’t pass on Android for some reason. I suspect the track data isn’t being loaded properly, so we keep waiting in the test for this to happen and it eventually times out.

The two problems we had before that have finally been solved. The first of them was that we were using the ‘loadedmetadata’ event to determine when the cues were loaded for a VideoElement. The problem with this was that we don’t implement the necessary steps to inform the VideoElement when the cues are ready, so ‘loadedmetadata’ is dispatched before they actually are. To solve this, Caitlin implemented HTMLTrackElement::ReadyState, which is in the spec but wasn’t in our implementation, and we can hook into that to determine when the cues are ready. The second problem was that the QueryInterface implementation for HTMLTrackElement had a bug in it where we weren’t able to cast it to an nsIDOMHTMLElement. This was fine on opt builds, but on debug builds we were getting assertions that the HTMLTrackElement wasn’t a DOMNode or DOMWindow when we tried to check its content policy. I landed a patch that solved that just last Friday.

Hopefully, we can have these tests landed by this week… we shall see though. It depends on how quickly this Android debugging goes for me.

Time Flies

The rest of my time these last few weeks has been taken up with relatively small things that build on each other: commenting on bugs, reading the spec (which can take forever as it’s fairly complicated), helping out Marcus, giving feedback on patches, planning what to do next, etc. Hopefully, we can get WebVTT in Firefox up to spec soon; we’ll have to see though.

Getting Involved

If you are interested in getting involved, I would really encourage you to come out and become a contributor. There really is a ton of work left to do — for the implementation of the track spec as well as for libwebvtt (the C parser). You can get in touch with the current contributors in a number of ways — Mozilla’s dev-media mailing list, the #media IRC channel on irc.mozilla.org, or the mozilla/webvtt GitHub page. Check out the WebVTT wiki page for more information on how to get involved.

Until next time.


WEBVTT: Initial Support Incoming

I have some great news. Bugs 833382 and 833386, the last pieces needed to get initial support for WEBVTT into Gecko, will be landing for sure next week. That being said, I hope I don’t have to eat my words. It’s looking really good though. Bug 833382 has gotten to the point where Boris (:bz) has conditionally r+’ed it, and 833386 has reached the same point with Ralph (:rillian). Now all that’s left to do for 833382 is to spin up some try builds, and if it’s all green, hopefully we’ll be good to go. 833386 still needs to go through review with Chris Pearce (:cpearce), but I don’t think that will take long.

I’ve been pushing really hard on landing these two this last week and I’m ecstatic that we’ve gotten to the point where we will be landing them in the next few days. I’m particularly happy about the WebVTTLoadListener, as most of that code is mine and I’ve worked really hard on it. It’ll feel good to land that. In the case of 833386, most of that code is Jordan Raffoul’s (:jbraffoul) and Marcus Nsaad’s (:msaad), and the work I did on it was just to consolidate it and get it through the last couple rounds of review. Marcus had been on vacation for a while and we really wanted to get this landed ASAP as it is blocking quite a few things, so I asked Marcus if I could step in and he didn’t have a problem with it (Yay Marcus!).

Tests, Tests

One of the biggest problems I was having with the tests for the HTMLTrackElement and TextTrack* DOM classes (833386) was that we wanted to test everything with the pref set to true and also with it set to false. The thinking was that we could do it in the same file. The problem was that it wasn’t working… and we all thought that it should be possible, so I spent about two to three days trying to get it to work. At first I suspected that it was due to closures in JavaScript and that the ‘state’, or variables, were somehow being sustained by this. There was no way to avoid being entangled in closures either, as we need to use a particular function, SpecialPowers.pushPrefEnv, that takes a function argument and calls it asynchronously after the pref is set. So in order to test with true and then false, we have to have one pushPrefEnv(true) and one pushPrefEnv(false), with the second embedded in the first so that they are called one after the other; that way we have a definitive point where we can call SimpleTest.finish().

I finally figured out, by asking the right people (bz and Ms2ger, who woulda’ thunk?), that it’s actually impossible to test with two different prefs in one page: there is one prototype for each element on the page, and the pref is only applied to it once, when the element is created for the first time. So once you’ve created an element under one pref on a page, that’s it; it will behave as if it is preffed that way no matter what you do. After that it was pretty quick to get through the rest of the code. One of the really good things about this process is that it allowed me to find a lot of points where our current implementation is not to spec. I’ve filed a few bugs on those this week. I’ve also closed a few bugs that have been fixed by changes in the last while, and spun off another bug for tests that we will need to implement when the WEBVTT pref finally gets removed.

Try Server Access

The other really awesome news is that I’ve finally got try server access! Earlier this week Daniel Holbert (:dholbert) suggested that I apply for it and said that he would vouch for me. I probably should have applied for it sooner than this, as I could have used it for sure. The process was fairly easy and I’m glad to say I now have Level 1 commit access. Woot! If you’re interested in applying as well, check out this page that describes what you will need to do.

In accordance with this new awesomeness I also had to learn about the process of pushing to the try server. Check out this good page for more information on how to do that. You’ll also probably want to look at the Mercurial Queues extension, as it helps with managing a bunch of patches that you can move in and out of your branches easily. This works really well with my workflow of working in git and then just applying patches to my hg repository and pushing to the try server.

Until next time.

CSS Parsing: Writing-mode

One of the things I’ve been working on since I got back from Taipei is helping with the implementation of vertical text in Gecko, which WEBVTT needs to support. The small way in which I helped out was to implement the “writing-mode” property in the style system. Basically this is just getting the Gecko CSS parser/scanner to understand the new property and process it correctly. This was fairly easy to implement because Daniel Holbert (:dholbert) gave me an awesome walkthrough of the things that would need to change in order for this to work, along with an example bug where he did a similar thing. So after half a day doing the initial code and a few rounds of review, it landed in the tree and we’re now one step closer to vertical text!

If you’d like to learn more about how the style system works you can head over to the style section of the Gecko overview page which also has a lot of really good information on how Gecko works in general. I won’t duplicate information here, but I will talk about a couple of the things I found interesting in the style system.

  • CSS property values are stored in both a specified and a computed form, and there are translation mechanisms to go back and forth between the two.
  • The computed values of CSS properties live in structs, namely nsStyleStructs, and CSS properties that tend to be set together live in the same struct. For example — when a user sets the font-family property they are likely to also set the font-size, font-weight, etc, so storing the computed values of these properties together makes sense.
  • Single instances of nsStyleStructs, i.e. one set of related properties, are shared across many different DOM nodes, as DOM nodes are more likely to share one set of CSS properties than they are to have different sets. For example, the vast majority of the time that a page sets font properties, the entire page will share them. This cuts down on memory usage. These structs are immutable, and when the same CSS property needs to be set differently on another set of DOM nodes, a new struct is created for the new set of properties.
  • Each nsStyleStruct has a CalcDifference() function that gives hints to Gecko about when it needs to update the rendering of the page based on the CSS properties changing, i.e. if it needs to reconstruct the frame or the text needs to be reflowed, etc.
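The sharing idea in the list above can be sketched in a few lines of standalone C++. This is a hedged toy model, not Gecko’s actual nsStyleStruct machinery; the FontStyle and Node names are made up:

```cpp
#include <cassert>
#include <memory>

// One immutable set of related font properties, shared by many nodes
// (a stand-in for something like Gecko's font style struct).
struct FontStyle {
  int size;
  int weight;
};

// A node just points at a shared, immutable style struct. Changing a
// property for some nodes means allocating a new struct for just them,
// leaving every other node pointing at the old one.
struct Node {
  std::shared_ptr<const FontStyle> font;
};
```

Because the structs are immutable, handing the same pointer to thousands of nodes is safe, and memory is only spent when a node actually needs a different set of values.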

In the future I’m hoping to help with more layout things. Hopefully, I can do the same kind of bug for the “text-orientation” CSS property, which is also needed for vertical text. I haven’t started yet as it’s still kind of up in the air in the spec. Right now I’ve also started work on another easy bug to reorganize the reftests in layout/reftest/form. Hopefully, I can get that done sometime next week. We’ll need reftests for WEBVTT as well, so this will be a good bug for learning about them.

Web Rendering Week — Recap

So I’m back home now. The flight home was a bit shorter, by about two hours, thankfully. Interestingly enough, I feel more jet lagged now that I’m home than I was in Taiwan. I’ve heard that going east is worse than going west, so that might be what’s happening to me right now. The rest of the rendering week, after I blogged last Tuesday, was just as awesome as the first part.

How Gecko Does X

I got to sit in on a lot more talks about “How Gecko does X” — like how the graphics engine works, how the layout system works, and, my favourite, how cycle collection works. Kyle Huey did an excellent job explaining how the Gecko cycle collector works. He gave us this paper as further reading about it (it’s a little dense, but definitely worth reading). I’ll try to do a blog post in the future on what I learned/will learn about it.

David Baron and Adam Roach also gave an awesome talk on how the W3C and IETF work. WEBVTT is my first major exposure to open specifications, so I’ve been interested in all the hows/whats/whys of open specifications and the politics behind them.

Initial WEBVTT Support

It wasn’t all fun and games over in Taiwan though; we were also doing a lot of work. We finally got bug 833385 landed near the end of the week. This means that we have support for all the new DOM elements that WEBVTT introduces: HTMLTrackElement, TextTrack, TextTrackList, TextTrackCue, and TextTrackCueList. We ran into a random, inexplicable bug when we were doing full try runs on the code, just before landing it. Ralph and I went to work debugging it (we had to use an ASAN build) and we ended up discovering that it happened in a very rare situation where the cycle collector nulls out the HTMLMediaElement’s TextTrackList member while the HTMLMediaElement is still alive. This results in a situation where HTMLMediaElement::FireTimeUpdate() is called just before the element is about to be deleted, and since we weren’t doing null checks on the call to TextTrackList::Update(), we would crash. After we got that fixed we were all green.
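The shape of that crash and its fix can be sketched with a couple of stand-in types. These are not Gecko’s real classes, just a hedged illustration of the missing null check:

```cpp
#include <cassert>

// Stand-in for the cue list; the real type is Gecko's TextTrackList.
struct TextTrackList {
  void Update(double /*aTime*/) {}
};

// Stand-in for the media element. The cycle collector may null out
// mTextTracks while the element itself is still alive.
struct MediaElement {
  TextTrackList* mTextTracks = nullptr;

  // Returns false (instead of crashing) when the member is already gone.
  bool FireTimeUpdate(double aTime) {
    if (!mTextTracks) {
      return false;  // the null check that was missing
    }
    mTextTracks->Update(aTime);
    return true;
  }
};
```

Without that guard, the call through a nulled member is exactly the kind of crash that only shows up in rare teardown orderings, which is why it took an ASAN build to pin down.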

That leaves just bug 833382 before we get initial support for WEBVTT. It was going well last week and we got an r+ from Chris Pearce. Now we just need an r+ from Boris and we should be good to go. It might take a few more rounds of review before that happens, but I’m optimistic we will be able to get this landed within a week or so.

One of the major problems I was having in Taiwan was trying to get a clean diff for 833382. The problem centered around the fact that up until now, mainly in my previous open-source class, we had decided to use a git branch as a main point of ‘integration’. We’ve all been working off this branch for a while. The history of this branch has become so ridiculous, and the code necessary for 833382 depends on so many other parts of the code that have been touched by, well, everyone, that it was pretty much impossible for me to do a clean diff or even get a good rebase. To rebase this beast I would have had to go through 150+ commits, each having merge conflicts… So I ended up just making a clean branch off master, manually moving all the code that I needed over to the new branch, and doing a diff that way. I’ll probably be staying away from these kinds of ‘integration’ branches in the future in order to keep my repo history more linear. It’s easier to get diffs that way.

The other thing I’ve been dealing with in the last few days is some code we landed back in February that Boris spotted as not being up to par. The issue is with some of the CSS selectors that we are using to style WEBVTT text — namely, we are using slow CSS selectors, which is bad. This is the first I’ve heard about some CSS selectors being slower than others, although that’s not surprising as I’m not super knowledgeable about CSS. Mozilla even has a page devoted to this that you can check out here. Ralph and I put together a patch yesterday to deal with this, which will most likely land today. I’ll have to update 833382 to reflect those changes today as well.

CSS Parser Hacking

I also sat in on the vertical-text layout meeting as it is of particular interest to WEBVTT. WEBVTT requires the ability to have vertical text, and so far Gecko doesn’t have this. Apparently vertical text has been kind of a thorn in the side of the layout team for a long time, as it’s been particularly hard to implement. However, there is a major push now to get it done, so that’s great. In accordance with this, Daniel Holbert asked me if I wanted to do some stuff for vertical text in the CSS parser; I accepted and got my first layout bug! So I’ll be hacking around in the CSS parser and layout sections of Gecko more in the future, hopefully.

WebMaker

Dave also told me the other day that he’s figured out an area of WebMaker that I can start contributing to, so I’m excited about that. I’ll be starting on this in about two weeks, and will most likely be splitting my time 70/30 or something like that between WEBVTT and WebMaker. We talked briefly about it, so my understanding isn’t 100%, but what I got from our talk is that I will be implementing a kind of wrapper around an HTML5 video element that will allow Popcorn Maker to work with it. From my understanding, Popcorn Maker works with many different video formats/sources and so it needs a uniform interface to work with all these different videos. That’s where the wrapper comes in. It allows Popcorn Maker to work with many different video formats and sources without worrying about the particulars. However, all this might be completely wrong as I might have misunderstood some things from our brief conversation… So don’t take my word for it! At any rate I’ll do another blog post about it when I get more information.

Until next time.

Mozilla Taiwan — Web Rendering Week

After a ridiculously long flight, 24 hours in total including airport time, we finally arrived in Taiwan, met up with Daniel (IRC:dholbert) and Seth (IRC:seth) and were on our way to the hotel in a cab. The plane ride was pretty good. Uneventful… just super long.

Taipei 101 building.


It’s now Wednesday and it’s been an awesome and eventful week. I’ve been meeting tons of Mozilla devs and learning a lot. We’ve also been working hard on landing that initial support for WEBVTT I was talking about in my last blog post.

Sunday I was kind of jet lagged, so I took a quick power nap and woke up for dinner, which we ate at the Taipei 101 building (formerly the tallest building in the world). Dinner was like 15 courses? Pretty full after that.

After sleeping Sunday night I was pretty much jet lag free… I was kind of surprised, as I’ve heard it’s really hard for some people to deal with. My first experience with it was kind of a non-issue. It’s probably because I’m used to sleeping late anyway… so really I just corrected my schedule to what’s normal for most people.

This week is pretty much a week where all the Web Rendering people can get together and work and talk face to face. There have been a lot of meetings, some of which I’ve sat in on, that are super interesting and informative. Robert (IRC:roc) gave an excellent talk on the working culture at Mozilla, which really reiterated to me how awesome the Mozilla working culture is and how different it is from other companies. The talk touched on openness at Mozilla, code review, software quality, and module ownership vs. Mozilla’s managerial structure, among many other things.

On the way to the office


Aside from working on initial WEBVTT support, I’ve also been getting reacquainted with the specification, as it’s changed a lot since I last looked at it; going over all the bugs that we currently have for WEBVTT in order to get some kind of idea of what we need to do next; and engaging in a lot of discussions about what we think the WEBVTT spec needs to improve on or change. This is great, as when I get back to Toronto I’ll have a better plan of what needs to be done and how we are going to do it.

I’m also learning a ton about different areas of Gecko that I didn’t know about before, just by listening to and talking with others. Overall I’d say I’ve learnt way more than I could ever hope to in the same amount of time sitting on IRC chatting with people.

One other thing I’m looking forward to is a talk on the cycle collector that Kyle (khuey) is going to be giving tomorrow. The more I try to work with and use cycle collection the more I want to understand it. Hopefully I can walk away from the talk with a better understanding of how it works.

Here are some assorted pictures of the Mozilla Taiwan office. It’s brand spanking new; the office space they have us working in was actually just finished before we got here, and they had to push the contractors to get it done early. It’s right across from the Bloomberg office here in Taipei, as well as some other cool places, and it’s literally 30 seconds from Taipei 101, so it’s right in the downtown core of the city.

It stinks that I won’t have enough time to check out many other places in Taipei, as work is taking up most of my time. Taiwan isn’t the place I thought I would visit first when traveling to Asia, or even a place I thought I would visit at all, but it’s definitely on my list of places to come back to.

Front sign and entrance


Space for gathering and presentations.


Work Space

Work space that Mozilla Taiwan put aside for us.

Front desk.


View outside the office


WEBVTT: Farewell DPS911

Tomorrow is the last day of my open-source class at Seneca, so this will be the last WEBVTT post that I make for the class, ever. It’s been a long journey since last September and we’ve made huge progress, learnt a ton, burnt out many a time, and had a great time doing it. If you are worried about no more posts on WEBVTT, fear not! I’ll still be posting regularly on WEBVTT, as I’ve now switched over to working on it, and possibly some WebMaker stuff, at CDOT for the next year. I’m really looking forward to it.

Now, let’s get on with it.


WEBVTT Parser

It’s been pretty exciting around WEBVTT in the last month or so — ever since we did a presentation at Toronto Mozilla we’ve received a lot more interest. It’s a pretty cool and strange feeling to have people interested in what we’re doing, especially with WEBVTT; it’s not very glamorous, as you can imagine. A few of my classmates and I also went to an “Open web open mic” night at Toronto Mozilla, where we got to do another presentation and show WEBVTT off in a kind of science-fair environment. We also got to see lots of great presentations and projects that are being worked on. It really opened my mind to what is going on in Toronto and beyond. Pretty cool stuff.

We recently got all our tests green! At that point we officially tagged a revision of the parser as version 0.4… so lots more work to do. Since then we’ve been adding more refined and atomic unit tests to the test suite. Most of them test the library’s internal functions; I’ve been focusing on the cue text tokenizer functions. Instead of passing in an entire WEBVTT file, we pass in the input that a function is expected to handle and make sure it behaves correctly. We’ve also been solving, in our integration branch, a few of the bugs that have been found by fuzzing WEBVTT, courtesy of cdiehl and rforbes. That’s awesome — we’re getting fuzz tested on something that hasn’t even landed in Nightly yet! Caitlin has also started to add the ones we have solved as regression tests.

Other than that, not much has happened on the parser lately, as we’ve all been crunching through the last assignments and exams of the semester. We’ll probably be looking at where to enhance the library in the next little while. There are some enhancement issues up on the repo right now that still need to be taken care of, so we’ll probably tackle those first.

Gecko Integration

The other big thing we’ve been working on is getting the parser integrated into Gecko. I’ve probably blogged before about how we have 2 of the 5 pieces we need landed in Nightly already. The last three things we need to land to get basic functionality working are the DOM classes, the DOM tests, and the “parser management” code.

Moving Code from WebVTTLoadListener

Around the time of the demo it was decided that we should move the code that converts the C parser data structs to DOM classes out of the WebVTTLoadListener and just use the LoadListener for… well, listening. The LoadListener’s job should be to serve as the point of contact between Gecko and the WEBVTT parser: when it receives data it hands it to the parser, and when it receives a cue it constructs a TextTrackCue and hands it to Gecko. I recently got around to that here. The TextTrackCue is now where the conversion code lives. We also now lazily convert the parsed WEBVTT nodes into HTMLElements when GetCueAsHTML() is called for the first time.

Properly Creating Nodes

We ran into a problem where processing cue text tags like <i>, <u>, <b>, etc., was crashing the browser. This was due to the fact that we weren’t properly creating the NodeInfo to be passed into the NS_NewHTMLElement() macro; we were just passing in the HTMLTrackElement’s NodeInfo. This would cause the HTMLTrackElement to be deleted when the HTMLElement was removed from the div’s child list. The correct way to do this is to get the HTMLTrackElement’s NodeInfoManager() and create a new NodeInfo using it.

Removing Children

We had a bug where we weren’t removing captions from the div properly. Previously we had been looping from zero to the length of the div’s child list and removing at the current index: a classic for loop. I tried and tried to figure out what was going wrong, and after a while I made my way over to #content to get some help. bz and Ms2ger were kind enough to help me. What I learnt from them is that removing the children of a node this way only removes every other node. This is because when you remove a node that isn’t at the end of the list, the rest of the list shifts down. So when we remove the node at 0, the node at 1 becomes the node at 0; we then advance to 1 and remove the node at 1, missing the node that was shifted! The first solution we thought of was to loop until the length is 0, always removing at index 0. However, we ended up using another solution that I would never have guessed: calling nsContentUtils::SetNodeTextContent(). This removes the tree for you and puts a TextNode in its place. For our solution we just pass in EmptyString() for the text.
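The off-by-shifting bug is easy to reproduce in isolation. Here child nodes are modeled as ints in a vector; this is just an illustration of the pattern, not Gecko code (Gecko’s actual fix was nsContentUtils::SetNodeTextContent(), as described above):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// The buggy pattern: the index advances while the survivors shift
// down, so every other element gets skipped.
std::vector<int> RemoveAllBuggy(std::vector<int> children) {
  for (std::size_t i = 0; i < children.size(); ++i) {
    children.erase(children.begin() + i);
  }
  return children;
}

// The first fix we considered: always remove at index 0 until empty,
// so the shift can never skip anything.
std::vector<int> RemoveAllFixed(std::vector<int> children) {
  while (!children.empty()) {
    children.erase(children.begin());
  }
  return children;
}
```

Running the buggy version over {0, 1, 2, 3} leaves the odd-indexed elements behind, which is exactly the “every other caption stays on screen” symptom.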

nsINode > nsIDOMNode

The other thing they asked me to do was to change how we were appending nodes to the tree. Instead of using the nsIDOMNode interface, which is slower and less efficient, we should use nsINode, which has basically the same capabilities. We can do the exact same thing with nsINode in simpler code.

Patches

I submitted a patch tonight that has the most up-to-date code for “WEBVTT parser management” in Gecko. I was hoping we could get this landed quickly, but the events of today have turned up even more work to do. First, the patch for the DOM classes that we thought would get through pretty quickly has a lot of problems with it, and second, the mapping from cue text tag classes to CSS selectors in Gecko is not at all as simple as I suspected.

I found this out today when trying to get CSS selectors working on the HTMLElements created from cue text tags. I had all the Gecko code working correctly, and yet the selectors in my external CSS file were not affecting the captions. I went over to #content, where bz and Ms2ger informed me that it was because we are constructing them as anonymous content. In other words, no external code can touch the HTMLElements we are creating; only internal code can. This wasn’t the behaviour I thought was needed, and after some discussion, #whatwg’s zcorpan informed us that they need to live in a video::cue pseudo-element as part of a sub-document. So in your external CSS you would write video::cue([tagname].[classname]) { }. However, bz said that in order to get a new pseudo-element we would need to do some ‘re-architecting’ of Gecko code. This immediately made me feel nauseous… just kidding, kind of.

In light of this, our new goal is to get our current semi-working code into Gecko behind a pref and then iterate on it. Things will be a lot easier once we get the first code landed.


That’s about it as far as I can remember. We’ve done a lot more little things since then as well. Head over to Mozilla’s WEBVTT repo on GitHub to check out all the changes. And feel free to get on irc.mozilla.org #seneca to coordinate with us if you want to help!

Until next time.

WEBVTT: Unit tests, unit tests…

I haven’t written in quite a while, so this is going to be a pretty long post.

TLDR: I made some progress on the Gecko code for the WEBVTT parser integration, but was unable to finish because I was moved over to working exclusively on the WEBVTT parser, as we needed to move fast on it. This resulted in unit-test fixing galore.

Gecko Integration

I was working on the Gecko integration stuff before I got pulled away and made some decent progress. I got some feedback on my first WIP and addressed most of the issues that the reviewers pointed out. A lot of it was minor nits like code style or incorrect code that I just needed to remove. However, there were a few big things:

  • Getting the OnDataAvailable() function to use the ReadSegments() function of the stream it is passed. To do this you need to create your own function that conforms to the prototype of nsWriteSegmentFunc. This lets the nsIInputStream read into your buffer for you, and then you can just hand that buffer over to whatever needs it. This prevents memory leaks as it creates a central location where a leak would happen, and presumably the code already in the tree is written for you and probably better than what you’ve written — so just use it.
  • Using an nsAutoRefTrait to manage the lifetime of the nsAutoRef<webvtt_parser>. That ended up not being so hard. What an nsAutoRefTrait of xType does is define what an nsAutoRef of xType needs to do during its lifetime. In our code, for example, all we need to do is define a Release() function that tells the nsAutoRef pointer what to call when it is being released. There are a few more behaviours that you can define, like what to do on setup, etc. This is necessary since most things in Gecko code are smart pointers and objects, so you don’t want to manage their lifetimes explicitly, but you do want a way to define the appropriate steps they need to take during their lifetimes. The awesome thing is that once you create an nsAutoRef of xType, the nsAutoRefTrait of xType will be automatically linked to it.
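The trait idea from the second bullet can be sketched as a tiny standalone template. This is a hedged toy in the spirit of nsAutoRef/nsAutoRefTrait, not the real Gecko template; Parser, ParserTrait, and the counter are all made up for the demo:

```cpp
#include <cassert>

// Counts how many times the resource was released, so we can observe
// the RAII behaviour in the test below.
static int g_released = 0;

// Stand-in for the C parser handle (webvtt_parser in the real code).
struct Parser {};

// The trait type: it tells the generic owner how to release the raw
// resource, analogous to defining Release() in an nsAutoRefTrait.
struct ParserTrait {
  static void Release(Parser* aParser) {
    ++g_released;
    delete aParser;
  }
};

// Minimal RAII owner: on destruction it calls the trait's Release().
template <typename T, typename Trait>
class AutoRef {
 public:
  explicit AutoRef(T* aPtr) : mPtr(aPtr) {}
  ~AutoRef() {
    if (mPtr) {
      Trait::Release(mPtr);
    }
  }
  AutoRef(const AutoRef&) = delete;
  AutoRef& operator=(const AutoRef&) = delete;
  T* get() const { return mPtr; }

 private:
  T* mPtr;
};
```

The payoff is the same as in Gecko: the owner goes out of scope and the resource is released exactly once, with the "how" kept in the trait rather than scattered at every call site.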

The other couple of things I did before submitting my last WIP for feedback were fleshing out a lot of the code that converts the C structs from the C parser to DOM elements, getting the code that deals with DocumentFragments working correctly, and a few other minor things. You can check out the full WIP that I submitted for my last review here.

Driving Issues

The other thing that I have really been trying to do, up until this point and currently, is drive the issue list. I’ve noticed that the issue list tends to stagnate if no one is monitoring it. To this point I have been looking over the issue list about once every other day: closing issues that need to be closed, i.e. ones that have been resolved, seem to be going nowhere, or seem to have gone as far as they are able to; and trying to push issues that seem to have stalled by @-mentioning the parties involved or asking a question about where we should head in light of the issue. This process has been very helpful because it has kept our issue list from being cluttered with issues that are no longer needed and has also driven issues that people might have forgotten about.

Parser Unit Test Debugging

About three weeks into working on the Gecko integration as well as the parser, I was asked to step back from the Gecko integration for a while and start working exclusively on getting the parser to pass all the unit tests. To that end, I relinquished my duties on the Gecko integration for a bit. My first job was to help Caitlin get her massive patch split up and landed. To do this she asked me to write a finish-parsing function that would allow the patch to be split up more easily. After that was done I started to work on the payload unit tests, as I anticipated Caitlin’s cue code would be landing soon. I was able to get around 90% of the payload unit tests passing over reading week, valgrind and all, and solve many bugs in the parser. A lot of the bugs that were affecting the payload tests were also affecting the cue tests, so in many cases I was able to kill two birds with one stone.

The Importance of Passing Unit Tests

I came across a situation in this sprint where I rediscovered the importance of passing unit tests. It all started with me fixing a problem we were having with a function that we use to attach a parsed cue text node to another parent node. The function kept segfaulting, and it was pretty easy to figure out what was happening: it was dereferencing a variable before allocating memory for it. To fix this I allocated the memory beforehand, and also rearranged the function a bit to simplify it and make it look cleaner. The code landed and all was good, or so I thought. Later I discovered that some of the valgrind errors we were getting in other unit tests were actually caused by what I had landed earlier. It was an easy fix, but it goes to show how important passing unit tests are. It’s nearly impossible to tell if the code you write breaks anything when everything is broken to begin with.

The Art of Face Palming

As always, this last week’s sprint had its fair share of face palming. Over the years I’ve gotten pretty good at it; you might even call me a face palm artist. The face palm that takes first prize this week was an error we were getting in the grow function for strings, where we were using the ‘==’ operator when we really wanted the ‘=’ operator. This was in an if statement, no less, so it took me a couple of looks and one double take to realize what was happening. C, you upset me so much. I think we should break up.

Reference Counting is Really Nice

This is the first time I’ve really used reference counting myself, but I have to say it’s pretty awesome. I’ve heard some talk smack about it, but from my small experience working on the WEBVTT parser with it, it makes things so much easier, especially since a lot of our objects are shared in multiple places inside and outside of the parser.

The problem that I found reference counting solves shows up when a project gets so big that it’s hard to tell where an object is used, especially when objects are passed around a lot. You might have the case where someone decides to delete an object that another piece of code is trying to use, and it blows up. That isn’t even the worst case. The worst case is when you do this and it doesn’t blow up at first, so it lands. Then later, for what seems like a random reason, it starts blowing up because the use case that triggers the bug is finally exercised, and at that point the reason it is blowing up is not at all clear.

Reference counting makes this so much easier because, in most cases, you can just assume that the calling function increased the reference count of the object when it passed it to your code, and so at the end of your code you can safely call the release function. No more explicitly managing the objects; your objects know when they need to be deleted. Because of this I’ve implemented reference counting on both the C nodes and our C string lists. Both of these structs are used extensively, passed around a lot, and are made available externally through the C++ wrappers, so it’s nice to have ref counting on them.

What Makes a Good Pull Request

One of the final things that really got reinforced for me over this last week’s sprint was what makes a good pull request, or patch. A good pull request is kind of like a good unit test. You want it to be focused in what it is trying to accomplish and atomic in scale, i.e. it touches as little as possible. That doesn’t mean you can’t make pull requests that change, like, a billion lines. What it does mean is that those billion lines are all being changed for the same focused reason. Granted, there are times when these rules cannot be followed, but you should try to follow them in ≈99% of your pull requests.

My Pull Requests are Over 9000 (not really, I’m a nub)

If you’d like to check out the work I’ve been doing for the past couple of weeks you can look at my list of pull requests.

Next Week

Next week, March 14, we will be travelling to the Mozilla Toronto office for our class. Our professor, Dave Humphrey, wants us to have a demo and presentation ready to show the devs at Mozilla, so we will be sprinting on locking in the last pieces of the puzzle for the WEBVTT integration. This is mostly the integration code that I had to leave off on last week. I’m going to be getting a lot of help with that this time, so hopefully we will get it done.

Until next time.