Tagged: software-development

CSS Parsing: Writing-mode

One of the things I’ve been working on since I got back from Taipei has been helping with the implementation of vertical text in Gecko, which WEBVTT needs to support. The small way in which I helped out was to implement the “writing-mode” property in the style system. Basically, this is just getting the Gecko CSS parser/scanner to understand the new property and process it correctly. This was fairly easy to implement thanks to Daniel Holbert (:dholbert), who gave me an awesome walkthrough of the things that would need to change and pointed me to an example bug where he did a similar thing. So after half a day doing the initial code and a few rounds of review, it landed in the tree and we’re now one step closer to vertical text!
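
To give a flavour of what the parser/scanner work amounts to: the new keyword values have to be recognized and mapped to computed constants. Here is a loose, self-contained illustration of that mapping step; it is not Gecko’s actual code (the real implementation lives in the tree’s property and keyword tables), and all names in it are made up:

```cpp
#include <cstring>

// Hypothetical computed-value constants for writing-mode; Gecko defines
// its own set of NS_STYLE_* constants for this.
enum WritingMode {
  eWritingMode_HorizontalTB,
  eWritingMode_VerticalRL,
  eWritingMode_VerticalLR,
  eWritingMode_Unknown
};

struct KeywordEntry {
  const char* keyword;   // the specified value as it appears in the CSS
  WritingMode computed;  // the computed constant it maps to
};

// The keyword table consulted when the scanner sees "writing-mode: <value>".
static const KeywordEntry kWritingModeTable[] = {
  { "horizontal-tb", eWritingMode_HorizontalTB },
  { "vertical-rl",   eWritingMode_VerticalRL   },
  { "vertical-lr",   eWritingMode_VerticalLR   },
};

// Translate a specified keyword into its computed form.
WritingMode ParseWritingMode(const char* value) {
  for (const KeywordEntry& entry : kWritingModeTable) {
    if (std::strcmp(value, entry.keyword) == 0) {
      return entry.computed;
    }
  }
  return eWritingMode_Unknown; // invalid values get dropped by the parser
}
```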

If you’d like to learn more about how the style system works you can head over to the style section of the Gecko overview page which also has a lot of really good information on how Gecko works in general. I won’t duplicate information here, but I will talk about a couple of the things I found interesting in the style system.

  • CSS property values exist in both a specified and a computed form, and there are translation mechanisms to go back and forth between the two.
  • The computed values of CSS properties live in structs, namely nsStyleStructs, and CSS properties that tend to be set together live in the same struct. For example, when a user sets the font-family property they are likely to also set font-size, font-weight, etc., so storing the computed values of these properties together makes sense.
  • Single instances of nsStyleStructs, i.e. one set of related properties, are shared across many different DOM nodes, as DOM nodes are more likely to share one set of CSS properties than they are to have different sets. For example, the vast majority of the time that a page sets font properties, the entire page will share those properties. This cuts down on memory usage. These structs are immutable, and when the same CSS property needs to be set differently on another set of DOM nodes, a new struct is created for the new set of properties.
  • Each nsStyleStruct has a CalcDifference() function that gives hints to Gecko about when it needs to update the rendering of the page based on CSS properties changing, i.e. whether it needs to reconstruct the frame, reflow the text, etc. (a rough sketch follows this list).
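
To make that last point concrete, here is a rough sketch of the shape a CalcDifference() implementation takes. This is not Gecko’s actual code; the real style structs and nsChangeHint values are more involved, so treat the names here as stand-ins:

```cpp
// Hypothetical change hints, ordered from cheapest to most expensive;
// Gecko's real ones are the nsChangeHint_* values.
enum ChangeHint {
  eChangeHint_None,             // nothing to update
  eChangeHint_RepaintFrame,     // repaint only, no layout needed
  eChangeHint_ReflowFrame,      // text/boxes need to be laid out again
  eChangeHint_ReconstructFrame  // throw the frame away and rebuild it
};

// Sketch of a style struct holding computed values that change together.
struct StyleVisibility {
  int mWritingMode;
  int mDirection;

  // Compare old vs. new computed values and report the rendering work needed.
  ChangeHint CalcDifference(const StyleVisibility& aNewStyle) const {
    if (mWritingMode != aNewStyle.mWritingMode) {
      // Changing the writing mode changes the whole layout of the frame.
      return eChangeHint_ReconstructFrame;
    }
    if (mDirection != aNewStyle.mDirection) {
      // A direction change still needs a reflow, but not a reconstruct.
      return eChangeHint_ReflowFrame;
    }
    return eChangeHint_None;
  }
};
```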

In the future I’m hoping to help with more layout things. Hopefully I can do the same kind of bug for the “text-orientation” CSS property, which is also needed for vertical text. I haven’t started yet as it’s still kind of up in the air in the spec. Right now I’ve also started work on another easy bug to reorganize the reftests in layout/reftest/form. Hopefully I can get that done sometime next week. We’ll need reftests for WEBVTT as well, so this will be a good bug where I can learn about them.

WEBVTT: Farewell DPS911

Tomorrow is the last day for my open-source class at Seneca, so this will be the last WEBVTT post that I make for the class, ever. It’s been a long journey since last September and we’ve made huge progress, learnt a ton, burnt out many a time, and had a great time doing it. If you are worried about no more posts on WEBVTT, fear not! I’ll still be posting regularly on WEBVTT as I’ve now switched over to working on it, and possibly some WebMaker stuff, at CDOT for the next year. I’m really looking forward to it.

Now, let’s get on with it.


WEBVTT Parser

It’s been pretty exciting around WEBVTT in the last month or so. Ever since we did a presentation at Toronto Mozilla we’ve received a lot more interest. It’s a pretty cool and strange feeling to have people interested in what we’re doing, especially with WEBVTT; it’s not very glamorous, as you can imagine. A few of my classmates and I also went to an “Open web open mic” night at Toronto Mozilla where we got to do another presentation and show WEBVTT off in a kind of science fair environment. We also got to see lots of great presentations and projects that are being worked on. It really opened my mind to what is going on in Toronto and beyond. Pretty cool stuff.

We recently got all our tests green! At that point we officially tagged a revision of the parser as version 0.4… so lots more work to do. Since then we’ve been adding more refined and atomic unit tests to the test suite. Most of them test our internal functions in the library. I’ve been focusing on the cue text tokenizer functions for these. Instead of passing in an entire WEBVTT file, we pass each function the input it is expected to handle and make sure it behaves correctly. We’ve also been solving, in our integration branch, a few of the bugs that have been found via fuzzing WEBVTT, courtesy of cdiehl and rforbes. That’s awesome: we’re getting fuzz tested on something that has not even landed in Nightly yet! Caitlin has also started to add the ones we have solved as regression tests.

Other than that, not much has happened on the parser lately as we’ve all been crunching through the last assignments and exams of the semester. We’re probably going to be looking at where to enhance the library in the next little while. There are some enhancement issues up on the repo right now that still need to be taken care of, so we’ll probably be tackling those first.

Gecko Integration

The other big thing we’ve still been working on is getting the parser integrated into Gecko. I’ve probably blogged before about how we have 2 of the 5 things we need landed in Nightly already. The last three things we need to land to get basic functionality working are the DOM classes, DOM tests, and the “parser management” code.

Moving Code from WebVTTLoadListener

Around the time of the demo it was decided that we should move the code that converts the C parser data structs to DOM classes out of the WebVTTLoadListener and just use the LoadListener for… well, listening. The LoadListener’s job should be to serve as the point of contact between Gecko and the WEBVTT parser: when it receives data it hands it to the parser, and when it receives a cue it constructs a TextTrackCue and hands it to Gecko. I recently got around to that here. The TextTrackCue is now the place where the conversion code lives. We also now lazily convert the parsed WEBVTT nodes into HTMLElements when GetCueAsHTML() is called for the first time.

Properly Creating Nodes

We ran into a problem where processing cue text tags like <i>, <u>, <b>, etc., was crashing the browser. This was due to the fact that we weren’t properly creating the NodeInfo to be passed into the NS_NewHTMLElement() macro. We were just passing in HTMLTrackElement’s own NodeInfo, which would cause the HTMLTrackElement to be deleted when the HTMLElement was removed from the div’s child list. The correct way to do this is to get HTMLTrackElement’s NodeInfoManager() and create a new NodeInfo using it.
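
Paraphrasing the fix from memory (a sketch, not the exact patch; the precise Gecko signatures may differ, and mTrackElement here is a stand-in for however the listener reaches its HTMLTrackElement):

```cpp
// Wrong: reusing the HTMLTrackElement's own NodeInfo ties the new element's
// bookkeeping to the track element, so removing the element from the div's
// child list ends up tearing down the track element too.
//
//   NS_NewHTMLElement(getter_AddRefs(element), mTrackElement->NodeInfo(), ...);

// Right: ask the track element's NodeInfoManager for a fresh NodeInfo that
// describes the element we actually want to create, e.g. an <i> tag.
nsCOMPtr<nsINodeInfo> nodeInfo =
  mTrackElement->NodeInfo()->NodeInfoManager()->GetNodeInfo(
    nsGkAtoms::i,              // tag name for this cue text tag
    nullptr,                   // no prefix
    kNameSpaceID_XHTML,        // HTML namespace
    nsIDOMNode::ELEMENT_NODE); // node type

nsCOMPtr<mozilla::dom::Element> element;
NS_NewHTMLElement(getter_AddRefs(element), nodeInfo.forget(),
                  mozilla::dom::NOT_FROM_PARSER);
```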

Removing Children

We had a bug where we weren’t removing captions from the div properly. Previously we had been looping from zero to the length of the div’s child list and removing at the current index. Classic for loop. I tried and tried to figure out what was going wrong, and after a while I made my way over to #content to get some help. bz and Ms2ger were kind enough to help me. What I learnt from them is that removing children of a node this way only removes every other node. When you remove a node that isn’t at the end of the list, the remaining children shift down: when we remove the node at index 0, the node at index 1 becomes the node at index 0; we then advance to index 1 and remove that, skipping the node that shifted down! The first solution we thought of was to loop until the length is 0, always removing at index 0. However, we ended up using another solution that I would never have guessed: calling nsContentUtils::SetNodeTextContent(). This removes the subtree for you and puts a text node in its place. For our solution we just pass in an EmptyString() for the text.
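
The skipping behaviour is easy to reproduce outside of Gecko. A minimal stand-alone illustration, with a vector standing in for the div’s child list:

```cpp
#include <cstdio>
#include <vector>

int main() {
  std::vector<int> children = {0, 1, 2, 3};

  // The buggy pattern: advance the index while the list shrinks underneath it.
  // After erasing index 0, the old index 1 slides into slot 0 and is skipped.
  for (size_t i = 0; i < children.size(); ++i) {
    children.erase(children.begin() + i);
  }
  std::printf("left behind: %zu nodes\n", children.size()); // 2, not 0!

  // The first fix we considered: always remove the first child until empty.
  children = {0, 1, 2, 3};
  while (!children.empty()) {
    children.erase(children.begin());
  }
  std::printf("left behind: %zu nodes\n", children.size()); // 0
  return 0;
}
```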

nsINode > nsIDOMNode

The other thing they asked me to do was to change how we were appending nodes to the tree. Instead of using the nsIDOMNode interface, which is slower and less efficient, we should use nsINode, which has basically the same capabilities. We can do the exact same thing with nsINode in simpler code.

Patches

I submitted a patch tonight that has the most up-to-date code for “WEBVTT parser management” in Gecko. I was hoping we could get this landed quickly, but the events of today have brought up even more work to do. First of all, the patch for DOM classes that we thought would get through pretty quickly has a lot of problems with it, and secondly, the cue text tag class to CSS selector mapping in Gecko is not at all as simple as I expected it to be.

I found this out today when trying to get the CSS selectors working on the HTMLElements created from cue text tags. I had all the Gecko code working correctly, and yet the CSS selectors in my external CSS file were not affecting the captions. I went over to #content, where bz and Ms2ger informed me that it was because we are constructing them as anonymous content. In other words, no external code can touch the HTMLElements we are creating; only internal code can. This wasn’t the behaviour that I thought was needed, and after some discussion #whatwg’s zcorpan informed us that they need to live in a video::cue pseudo-element as part of a sub-document. So in your external CSS selectors you would put video::cue([tagname].[classname]) { }. However, bz said that in order to get a new pseudo-element we would need to do some ‘re-architecting’ of Gecko code. This immediately made me feel nauseous… just kidding, kind of.

In light of this, our new goal is to get our current semi-working code into Gecko behind a pref and then iterate on it. Things will be a lot easier when we get the first code landed.


That’s about it as far as I can remember. We’ve done a lot of little things since then as well. Head over to Mozilla’s WEBVTT repo on GitHub to check out all the changes. And feel free to get on irc.mozilla.org #seneca to co-ordinate with us if you want to help!

Until next time.

WEBVTT: Unit tests, unit tests…

I haven’t written in quite a while, so this is going to be a pretty long post.

TLDR: I made some progress on the Gecko code for the WEBVTT parser integration, but was unable to finish because I was moved over to finishing the WEBVTT parser exclusively as we needed to move fast on it. This resulted in unit test fixing galore.

Gecko Integration

I was working on the Gecko integration stuff before I got pulled away and made some decent progress on that. I got some feedback on my first WIP and addressed most of the issues that the reviewers pointed out. A lot of it was minor nits like code style or incorrect code that I just needed to remove. However, there were a few big things:

  • Getting the OnDataAvailable() function to use the ReadSegments() function of the stream it is passed. To do this you need to create your own function that conforms to the function prototype of nsWriteSegmentFunc. What that does is allow the nsIInputStream to manage the buffer for you, handing your function each segment of data, which you can then pass on to whatever needs it. This prevents memory leaks as it creates a central location where a leak would happen, and presumably the code in the tree is already written for you and probably better than what you’ve written, so just use it (see the first sketch after this list).
  • Use an nsAutoRefTrait to manage the lifetime of the nsAutoRef<webvtt_parser>. That ended up not being so hard. What an nsAutoRefTrait of xType does is define what an nsAutoRef of xType needs to do during its lifetime. In our code, for example, all we need to do is define a Release() function that tells the nsAutoRef pointer what to call when it is being released. There are a few more behaviours that you can define, like what to do on setup, etc. This is necessary since most things in the Gecko code are smart pointers and objects, so you don’t want to manage their lifetimes explicitly, but you do want to define the appropriate steps they take during their lifetimes. The awesome thing about this is that once you create an nsAutoRef of xType, the nsAutoRefTrait of xType will be automatically linked to it (see the second sketch after this list).
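
Here is roughly what the first item looks like in code. This is a sketch from memory rather than the actual patch: the callback typedef in the tree is spelled nsWriteSegmentFun, the parameter list may differ slightly, and webvtt_parse_chunk stands in for the parser’s real chunk entry point:

```cpp
// A segment-reader callback handed to nsIInputStream::ReadSegments().
// The stream walks its own internal buffers and calls this once per
// segment, so we never have to allocate or copy a buffer ourselves.
static NS_METHOD
ParseChunk(nsIInputStream* aStream, void* aClosure,
           const char* aFromSegment, uint32_t aToOffset,
           uint32_t aCount, uint32_t* aWriteCount)
{
  WebVTTLoadListener* self = static_cast<WebVTTLoadListener*>(aClosure);

  // Hand the stream's buffer straight to the WEBVTT parser.
  webvtt_parse_chunk(self->mParser, aFromSegment, aCount);

  *aWriteCount = aCount; // tell the stream we consumed the whole segment
  return NS_OK;
}

// Then OnDataAvailable() boils down to:
//   uint32_t read;
//   aStream->ReadSegments(ParseChunk, this, aCount, &read);
```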
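
And the trait itself is tiny. Again a sketch from memory (the Gecko template is spelled nsAutoRefTraits, and webvtt_delete_parser stands in for the parser’s real destroy function):

```cpp
// Teach nsAutoRef how to clean up a webvtt parser handle: the trait
// specialization only has to say what "release" means for this type.
template <>
class nsAutoRefTraits<webvtt_parser> : public nsPointerRefTraits<webvtt_parser>
{
public:
  static void Release(webvtt_parser* aParser) { webvtt_delete_parser(aParser); }
};

// With that in place the handle manages itself:
//   nsAutoRef<webvtt_parser> parser(rawParser); // released on scope exit
```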

The other couple of things I did before I submitted my last WIP for feedback were fleshing out a lot of the code that will convert the C structs from the C parser to DOM elements, getting the code that deals with DocumentFragments working correctly, and a few other minor things. You can check out the full WIP that I submitted for my last review here.

Driving Issues

The other thing that I have really been trying to do up until this point, and currently, is drive the issue list. I’ve noticed that the issue list tends to stagnate if no one is there monitoring it. To this point I have been looking over the issue list about once every other day, closing issues that need to be closed, i.e. ones that have been resolved, seem to be going nowhere, or seem to have gone as far as they are able to, as well as trying to push issues that seem to have stalled by @ mentioning the parties involved or asking a question about where we should head in light of the issue. This process has been very helpful because it has kept our issue list from being cluttered by issues that are no longer needed and has also driven issues that people might have forgotten about.

Parser Unit Test Debugging

About three weeks into working on the Gecko integration as well as the parser, I was asked to step back from the Gecko integration for a while and to start working exclusively on getting the parser to pass all the unit tests. To that end, I relinquished my duties on the Gecko integration for a bit. My first job was to help Caitlin get her massive patch split up and landed. To do this she asked me to write a finish-parsing function that would allow the patch to be split up more easily. After that was done I started to work on the payload unit tests, as I anticipated Caitlin’s cue code would be landing soon. I was able to get around 90% of the payload unit tests working over reading week, valgrind and all, and solve many bugs in the parser. A lot of the bugs that were affecting the payload tests were also affecting the cue tests, so I was able to kill two birds with one stone in many cases.

The Importance of Passing Unit Tests

I came across a situation in this sprint where I re-discovered the importance of passing unit tests. It all started with me fixing a problem we were having with a function that we use to attach a parsed cue text node to another parent node. The function kept segfaulting, and it was pretty easy to figure out what had been happening: it was dereferencing a variable before allocating memory for it. To fix this I allocated memory beforehand, as well as rearranged the function a bit to simplify it and make it look cleaner. The code got landed and all was good, or so I thought. Later I discovered that some of the valgrind errors we were getting in other unit tests were actually caused by what I had landed earlier. It was an easy fix, but it just goes to show how important passing unit tests are. It’s kind of impossible to tell if the code you write breaks anything when everything is broken to begin with.
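
The crash had the classic shape below; this is a simplified, hypothetical reconstruction, not the parser’s real code:

```cpp
#include <cstdlib>

struct node { node* parent; };

// Broken: writes through *out before *out points at anything.
void attach_broken(node** out, node* parent) {
  (*out)->parent = parent; // segfault: dereferenced before allocation
}

// Fixed: allocate first, then initialize.
void attach_fixed(node** out, node* parent) {
  *out = static_cast<node*>(std::calloc(1, sizeof(node)));
  if (*out) {
    (*out)->parent = parent;
  }
}
```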

The Art of Face Palming

As always, this last week’s sprint had its fair share of face palming. Over the years I’ve gotten pretty good at it; you might even call me a face palm artist. This week’s face palm that takes first prize was an error we were getting in the grow function for strings, where we were using the ‘==’ operator when we really wanted the ‘=’ operator. This was in an if statement, no less, so it took me a couple looks and one double-take to realize what was happening. C, you upset me so much, I think we should break up.
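
For the curious, here is a toy reconstruction of the bug (hypothetical names, not our actual grow-function code):

```cpp
#include <cstdio>

struct string_buf { unsigned length; };

int main() {
  string_buf str = {0};
  unsigned grown = 16;

  if (str.length == grown) { /* comparison: nothing is ever assigned */ }
  std::printf("after '==': %u\n", str.length); // still 0

  if ((str.length = grown)) { /* assignment: what we actually wanted */ }
  std::printf("after '=':  %u\n", str.length); // 16
  return 0;
}
```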

Reference Counting is Really Nice

This is the first time I’ve really used reference counting myself, but I have to say it’s pretty awesome. I’ve heard some people talk smack about it, but from my small experience working with it on the WEBVTT parser, it makes things so much easier, especially since a lot of our objects are shared in multiple places in and outside of the parser. The problem that I found reference counting solves is that when a project gets big, it’s hard to tell where an object is used, especially when objects are passed around a lot. So you might have the case where someone decides to delete an object that another piece of code is trying to use, and it blows up. This isn’t even the worst case. The worst case is when you do this and it doesn’t blow up at first, so it lands. Then later, for what seems like a random reason, it starts blowing up because the code path that triggers the problem is finally being exercised, and the reason it is blowing up is not at all clear at first. Reference counting makes this so much easier because when you are coding you can, in most cases, just assume that the calling function increased the reference count of the object when it passed it to your code, and so at the end of your code you can safely call the release function. No more explicitly managing the objects; your objects know when they need to be deleted. Because of this I’ve implemented reference counting on both the C nodes and our C string lists. Both of these structs are used extensively, passed around a lot, and are made available externally through the C++ wrappers, so it’s nice to have ref counting on them.
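
Here is the pattern in miniature. This is a sketch rather than the library’s exact code; the parser’s real structs and function names differ:

```cpp
#include <cstdlib>

// A reference-counted node, C-style like the parser's structs.
struct rc_node {
  int refs; // how many owners currently hold this node
  // ... node payload ...
};

// Whoever shares the node with another owner bumps the count first...
void node_ref(rc_node* n) {
  ++n->refs;
}

// ...and every holder releases when done; the last one out frees it.
void node_release(rc_node* n) {
  if (--n->refs == 0) {
    std::free(n);
  }
}

// Usage: allocate with refs = 1, node_ref() when handing it to another
// owner, node_release() when an owner is finished with it.
```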

What Makes a Good Pull Request

One of the final things that really got reinforced for me over this last week’s sprint was the set of factors that make a good pull request, or patch. A good pull request is kind of like a good unit test. You want it to be focused in what it is trying to accomplish and atomic in scale, i.e. it touches as little as possible. That doesn’t mean you can’t make pull requests that change like a billion lines. What it does mean is that those billion lines are all being changed for the same focused reason. Granted, there are times when these rules cannot be followed, but you should try to follow them in ≈99% of your pull requests.

My Pull Requests are Over 9000 (not really, I’m a nub)

If you’d like to check out the work I’ve been doing for the past couple of weeks you can look at my list of pull requests.

Next Week

Next week, March 14, we will be travelling to the Mozilla Toronto office for our class. Our professor, Dave Humphrey, wants us to have a demo and presentation ready to show the devs at Mozilla. So we will be sprinting on getting the last pieces of the puzzle for the WEBVTT integration locked in. This is mostly the integration code that I had to leave off last week. I’m going to be getting a lot of help this time, so hopefully we will get it done.

Until next time.

Connected Wellness: Build Problems

For the last week I’ve been trying to fix some Xcode build problems that we’ve been having. After I upgraded the project to 2.3, Carl tried to run it on the simulator and it didn’t work. After doing a lot of digging I figured out that I had only provided an armv7 static library of Cordova. What this meant was that we couldn’t run it on the simulator, because the simulator runs on the Mac, which uses an x86 processor. What I ended up having to do was add two versions of the library to the project, one compiled for x86 and one compiled for armv7, and then make the project conditionally link them in based on whether we were building for x86 or armv7. This was kind of tricky because I didn’t know at first how Xcode manages this stuff. I finally discovered that you can add a build condition underneath Other Linker Flags in the target’s properties that allows you to do this.

After this was all done I had to take care of adding in the 2.3 versions of the AppDelegate and ViewController, because these weren’t in the project when I switched over to 2.3. They had been taken out, by me I believe; I forget why. Once that was done I began to get some problems with the linker not being able to resolve some of the symbols in the Cordova library that had to do with extending classes. Fixing this required me to add “-force_load” to the linker flags. This tells Xcode to load the entire library, whereas if you don’t put that, its default mode is to skip object files that only contain extensions… strange.
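
Both fixes live in the target’s Other Linker Flags. From memory, the entries looked something like this (an xcconfig-style sketch; the library names and path here are made up):

```
// Conditionally link the right slice of the library per architecture:
OTHER_LDFLAGS[arch=armv7] = -lCordova-armv7
OTHER_LDFLAGS[arch=i386]  = -lCordova-i386

// Force the whole archive to load so the class extensions resolve:
OTHER_LDFLAGS = $(inherited) -force_load $(PROJECT_DIR)/libs/libCordova.a
```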

There’s still one last problem with the build that Carl has informed me of. The problem isn’t currently surfacing on my own machine so we’re going to have to do some more digging.

The final thing I did was attempt to simplify the Command hierarchy by making ReturnableCommand just a Returnable and then having BluetoothCommand implement the Command protocol. This simplifies it a little because ReturnableCommand no longer has to be an abstract class. It also makes sense because an object that is returnable to the Cordova library isn’t necessarily a command. This way, in the future, it will be possible to have many different types of objects return a plugin result back to Cordova.

Connected Wellness: Upgrading Cordova 2.0 to 2.3

We’ve gotten quite a lot of work done this week on the Connected Wellness project. However, we’re getting close to a point where we might not be able to do any work until we get Apple Developer licenses. Most of the preliminary development has been done and we now need to start testing code; to do that we need those licenses! I heard from Carl that it’s being worked on and we should have them soon, so I’m hopeful.

Other than that, we had a big review of Caitlin’s packet parser implementation. I probably spent a few hours just reviewing that alone. I also went around cleaning up a few of the issues, such as removing commands that we aren’t able to implement yet due to not being able to fully control Bluetooth on Apple devices, and implementing the last few commands.

The biggest thing that I’ve done this week is include the Cordova library in the mercurial repo and get it set up so that it is incorporated into the Xcode project out of the box. This way we don’t have to set it up manually every time we clone a fresh repo (that was getting tiring). It took a little bit of figuring out, but what ended up working was to include the library file along with the header files of the library (Apple requires header files for statically linked libraries to be somewhere in the project). After I did this I needed to add the folder that holds the library and header files to the required frameworks list of the project, modify the header search paths to include the directory where the library’s headers are, and set the linker up to link with the library by giving it the path to the .a file. All these options can be found in the “Build Settings” tab of the build target in Xcode, i.e. the project file.

After I got the library incorporated into the repo, I went about moving our current code to support Cordova 2.3 (we were on 2.0 before). This proved to be very easy. The only things that changed were the include paths of the Cordova headers when #import-ing them, the function signature of the method that Cordova calls in the plugin, and the way that plugin results are passed back to Cordova. It was actually easier moving to 2.3 than it was to code against 2.0, because they’ve made it a lot easier and more intuitive to work with.

You can check out that code in the pull request here.

After that pull request lands there’s only going to be one last thing to code – the stopListeningCommand. That will probably not be hard. I’m hoping that we will be able to get those Apple Developer licenses early next week so we can continue powering through this. It is due by the end of February…

WEBVTT Update: Parser Review, Integration into Firefox

For the last two weeks we’ve been working steadily on the WEBVTT parser. Most of the work being done now is related to getting the parser integrated into Firefox. We’re building on top of the original bug filed on Bugzilla by Ralph and are now using it as a co-ordination bug for the five bugs we’ve split it into. The “bug sections” are:

  • Integrating the webvtt build system into the Firefox build system.
  • Adding a captions div to an nsVideoFrame so the captions can be rendered on screen.
  • Creating a “Text Track Decoder” that will function as the entry point for Gecko into the webvtt parser.
  • Creating new DOM bindings i.e. TextTrack, TextTrackCue, TextTrackCueList.
  • Creating DOM tests using Mochitest for the new DOM bindings.

You can check out a more in depth break down of our bug plan here.

The other major thing that a few of us in the class have been engaged in is the review of the 0.4 parser code. The review is still in its early-to-mid stages, so we have a lot more to do on that. I’ve been participating there by filing and commenting on issues and fixing a few of the bugs that have surfaced.

We’ve also moved the parser code over to the mozilla webvtt repository on GitHub (yay!) and have landed the 0.4 parser code there in a dev branch. After the review is done it will be landed on the master branch.

Firefox Integration

I’ve been working on the Text Track Decoder for the parser integration into Firefox. This part of the integration functions as an entry point into our parser for Gecko.

How It Works

The short version of how the Text Track Decoder works is that when an HtmlTrackElement receives new data from a VTT byte stream, it passes the data off to its WebVTTLoadListener (Text Track Decoder), which then calls our webvtt parser to parse the chunk of the byte stream it just received. The WebVTTLoadListener also provides callback functions to the parser for passing back cues when the parser has finished them and for reporting errors when the parser encounters them. The final thing the WebVTTLoadListener facilitates is converting the cues that are passed back in the callback function to the various DOM elements that represent a webvtt_cue, and then attaching those to either the HtmlTrackElement’s track, in the case of the cue settings, or the HtmlTrackElement’s MediaElement’s video div caption overlay (phew), in the case of the parsed webvtt cue text node tree.
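
In code, the wiring looks roughly like this. It is a sketch from memory: the parser’s C API names are close to, but may not exactly match, what is below:

```cpp
// Callbacks the C parser invokes as it works. The listener instance rides
// along as the user data pointer.
static void
OnParsedCue(void* aUserData, webvtt_cue* aCue)
{
  // Convert the C cue to DOM objects and hand them to Gecko.
  static_cast<WebVTTLoadListener*>(aUserData)->OnCue(aCue);
}

static int
OnReportError(void* aUserData, webvtt_uint aLine, webvtt_uint aColumn,
              webvtt_error aError)
{
  static_cast<WebVTTLoadListener*>(aUserData)->OnError(aLine, aColumn, aError);
  return 0; // tell the parser to keep going
}

// Creating the parser with the callbacks, and feeding it chunks:
//   webvtt_create_parser(&OnParsedCue, &OnReportError, this, &mParser);
//   webvtt_parse_chunk(mParser, aBuffer, aLength);
```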

What We’ve Done

The first order of business that we took care of in getting this done was to ask Chris Pearce, who works very closely with Firefox’s media stuff, to give us a high level overview of what we would need to accomplish in order to get this working. That was sent in the form of an email which my Professor, Dave Humphrey, then kindly posted on our bug (I forgot to do so!).

We then quickly went about implementing the initial steps Chris talked about. We’ve done steps 1–4 so far:

  • The HtmlTrackElement::LoadListener has been moved to its own file and renamed WebVTTLoadListener.
  • The HtmlTrackElement now has a WebVTTLoadListener reference which is initialized in LoadResource.
  • WebVTTLoadListener now manages a single webvtt parser which is created and destroyed along with it.
  • WebVTTLoadListener now provides callback functions to the parser for returning finished cues and reporting errors.

We’ve also added three convenience functions to turn webvtt cue stuff into the DOM bindings. These are:

  • cCueToDomCue – Transforms a webvtt cue’s settings into a TextTrackCue (almost done).
  • cNodeToHtmlElement – Transforms a webvtt node into an HtmlElement, recursively converting its children if it is an internal webvtt node (not done at all!); see the sketch after this list.
  • cNodeListToDocumentFragment – Transforms the head node’s children into HtmlElements and adds them to a DocumentFragment (pretty much done).
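
The recursive conversion in cNodeToHtmlElement has the usual tree-walk shape. Here is a stand-alone illustration with simplified, made-up types; the real function builds Gecko HtmlElements rather than these structs:

```cpp
#include <memory>
#include <string>
#include <vector>

// Simplified stand-ins for the parser's node tree (hypothetical types).
struct CNode {
  std::string tagOrText;        // tag name for internal nodes, text for leaves
  std::vector<CNode> children;  // empty for leaf (text) nodes
};

struct HtmlNode {
  std::string tagOrText;
  std::vector<std::unique_ptr<HtmlNode>> children;
};

// Leaves become text nodes; internal nodes become elements whose children
// are converted recursively, mirroring cNodeToHtmlElement.
std::unique_ptr<HtmlNode> ToHtml(const CNode& node) {
  auto out = std::make_unique<HtmlNode>();
  out->tagOrText = node.tagOrText;
  for (const CNode& child : node.children) {
    out->children.push_back(ToHtml(child));
  }
  return out;
}
```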

The callback function for returning cues now:

  • Calls the cCueToDomCue function and adds the resulting HtmlTextTrackCue to its owning HtmlTrackElement’s cue list.
  • Calls cNodeListToDocumentFragment and adds the resulting DocumentFragment to the caption overlay.

Right now we’ve run into some problems in figuring out how to work with the Firefox code. I’ve listed those in my recent WIP update on the bug. Other than implementing those steps, I’ve just been getting acquainted with the Firefox code that we have to touch and figuring out the basics of how it’s all going to fit together. I think we’ve gotten a big chunk of it done so far, mostly the overall frame of how it’s going to work as well as turning a webvtt cue’s settings into a TextTrackCue. I’ve also met the deadlines and goals that I set for myself at the beginning of this semester, so I’m fairly happy. Going forward, I think I know enough now to ask intelligent questions about how to solve the problems that I listed in the WIP, so that’s what I will be doing in the coming weeks when I get stuck.

As always, I’m ever confident that we will finish the project!

Connected Wellness: Threads, Cordova Lib, and Java to iOS

So it’s been another eventful week for the Connected Wellness iOS team at CDOT. We’ve made a lot of progress towards having the project completed, but there are still some major outstanding issues that we need to tackle. The PacketParser has yet to be completed (one of the most complex portions of the plugin), we’ve yet to hear back from A&D about low energy BT devices, which we need in order to use the CoreBluetooth framework provided by Apple, and we’re still running into some problems regarding the amount of control that the CoreBT framework gives us. The CoreBT framework does not expose a way to start, stop, or enable Bluetooth on the device. These are all things that the Android plugin is able to do. Although it is not completely necessary to implement these (the user can enable Bluetooth manually), it would be nice to provide them as a convenience.

Threads

To continue my discussion from last week in regards to the Invoker and dispatching Commands asynchronously on threads: it has been ridiculously easy to implement this. Instead of going through and creating our own WorkQueue and thread management classes, I’ve used NSOperationQueue and NSInvocationOperation inside the Invoker class. NSOperationQueue is provided by Apple and has all the functions needed to properly manage thread execution on a work queue, and NSInvocationOperation allows us to specify the target and selector that the threads started from the work queue will invoke. Check out how easy it was to do here.

Cordova Lib

During this last week we ran into a big problem of everyone checking code into the repo that wasn’t compiling or that caused other problems. Yesterday I went through and fixed all the compilation problems. We’re planning to have a stricter review process in the future. One of the major problems that I encountered was the Cordova lib not being properly compiled and linked into the project. This happened because, for some reason, Cordova 2.0 doesn’t support armv6, so I had to remove that architecture from the Cordova project as well as the iOS port project. The other big issue is that currently the Cordova library is not included in the repository. This means that when a fresh copy is cloned you have to go through the process of manually downloading PhoneGap and incorporating it into the project. So in the next day or so I’m going to set up the repo to include the compiled archive and point the project to it by default.

Java to iOS

The other major problem that I ran into this week was specific to the port from Java to iOS. In the Java version, the concrete commands reside as inner classes in the DevicePlugin class, so they all have access to the BluetoothServer stored within it. Moving over to iOS we cannot use inner classes, and so the commands need to be provided with access to the server in some other way. To compensate for this I’ve introduced a new command, BluetoothCommand, which sits between the concrete commands and the ReturnableCommand. BluetoothCommand exposes the base plugin in the ReturnableCommand as a MedicalDevicePlugin, as well as exposing a property on it to access the BluetoothServer on the MedicalDevicePlugin. You can check that out here. Now that I’m thinking about it, a BluetoothCommand doesn’t necessarily need to work with a MedicalDevicePlugin… what we’d need to do then, in order to abstract it more, is to create a base class like BluetoothPlugin that exposes a BluetoothServer property. I don’t really know if we should do this though, as I’ve read a lot recently about the dangers of over-designing code. I’ll post back next week about what comes of this.

Next Week

By the end of next week I’m hoping that we’ll have most of the pieces in place to start testing: the PacketParser will be done and the few remaining commands will be implemented. Personally, I will be working on some of the outstanding commands, such as listen and isSupported().