Web Development

I dreamt that I drove in pitch darkness with an empty fuel tank in the middle of nowhere, only to get to my best friend so he could tell me about his idea for a startup. Now that I’ve woken up, I still think it’s an incredible idea. Since both of us lack the resources to actually pull it off, I’m going to share it with the rest of the world, and if anyone is interested in picking it up, talk to me.

He named the idea “Among friends” and it’s essentially classified ads, but slightly more interesting, because it’ll have things like “I have 2 giveaway tickets for a show in Glasgow tomorrow night” or “any interesting hiking ideas for the weekend?” or “I’m looking for interesting vegan dessert recipes”. My friends can provide suggestions or link to other people’s posts (such as “I’m looking for tickets for tomorrow’s show!”). Imagine classified ads that are prioritised by your friendship relatedness.
The things you offer or look for can have an expiration date, making them more critical and important; they can be location-based to make them more relevant. But the important part is that your actions get scored: if all I do is give loads of shitty tips, I’m just spamming the system; but if I help people solve their problems, I become a valuable friend. If I re-post a friend’s problem or vouch for a solution, I help resolve a problem.
The technically challenging part of building such a system would be understanding the text, in order to avoid context-based forms (“oh, a recipe? so fill in the ingredients and the instructions and tag whether it’s gluten-free or vegan or kosher…”). Forms discourage people. The system should work in plain language.
The future of social networks is in turmoil as Facebook tests the boundaries of what it can do and how it can profit, following its connection to the US election fiasco. Its business model has evolved tremendously over the years but, in general, it always moved towards the hyped “status” (or “public notifications”) as used by Twitter. For that job, I think Twitter is great (it was even better when it forced messages to be condensed) and I would leave that aside. I dare to think that the next stage in the evolution of social networks won’t be about what you’ve got to say, but about what you’ve got to offer to solve my questions.

Originally this blog aimed to cover both the technical aspects and the philosophical issues related to my Theodorus project (which I’m somewhat ashamed to say isn’t really progressing, as I’m busy with other stuff, such as blog-writing). So this post is going to be very technical and probably won’t interest some of the demographics; still, as I write for my own amusement, you may choose to skip this one, or not.

As most modern-day programming languages (ES6 included) have classes, we’ve grown used to them, but since JavaScript originally worked differently, I believe it forces us to a better understanding of what classes actually are. The original C didn’t have classes either, but it had “structs”: a collection of other structs and basic-type variables, like integers and arrays of characters (later to evolve into strings). Then someone came up with the brilliant idea of adding methods to those structs, and thus C++, the upgraded version of C, had classes. So now we have classes, which are strongly related to Object-Oriented Programming (OOP). The idea is to encapsulate methods and information regarding a certain business-logic concept. This assures “safer” interaction between several concepts, as they are forbidden from touching each other’s private parts.

OOP is bad. But our heads are so wrapped around it that it’s kinda hard to think past it, and here is where JavaScript comes to the rescue. JavaScript, being a prototypal rather than class-based language, talks about scopes, where each function creates a new scope on top of the current one, while keeping all underlying scopes available, so this code will work:

(function external() {
  var foo = 1;

  (function internal() {
    var bar = 2;
    console.log(foo + bar); // output: 3
  })();

  console.log(typeof bar === 'undefined'); // output: true
})();

The “bar” variable existed only within the internal function and once that function returned, it died. But note that the “foo” variable was available in the internal function as well. How about this code:

(function the() {
  var foo = 1;

  (function plot() {
    var foo = 2;

    (function thickens() {
      console.log(foo); // output: 2
    })();

    (function further() {
      var foo = 3;
      console.log(foo); // output: 3
    })();
  })();

  console.log(foo); // output: 1
})();

As the scopes are stacked one on top of the other, each function will first find the nearest variable with the name it’s looking for. It’s worth mentioning, though, that accessing lower levels of the scope is more expensive than the immediate levels so it is better to pass the variables along.

However, despite creating layers of scope, the magical word “this”, which is used in many languages, still points to the enclosing object – and in our example it’s the same object. But we can create a new enclosing object using the word “new”:

function Bar() {}

Bar.prototype.whoAmI = function () {
  return this;
};

var foo = new Bar();

console.log(this); // output: "Window" (in a browser's global scope)

console.log(foo.whoAmI()); // output: the instance itself, i.e. Bar {}

You might have noticed the word “prototype” there. JavaScript functions have an object attached to them called prototype, which is a collection of functions that all new instances of the enclosing object will “inherit”. And that’s great, because I can create an object, let’s say “foo”, which is an instance of “Bar”, and if I now add a new method to Bar’s prototype, foo will be updated as well!

function MyClass() {}

var instance = new MyClass();

console.log(typeof instance.myMethod); // output: "undefined"

MyClass.prototype.myMethod = function () {
  return true;
};

console.log(typeof instance.myMethod); // output: "function"

So unlike many other languages, with JavaScript I don’t need to declare all of a class’ capabilities up front. I can do it whenever I want, if I want. I can also have my class inherit from several prototypes, so my Robo-dog can have both dog and robot attributes, and is not forced to extend only one of them.
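A minimal sketch of that Robo-dog (all the names here are made up): JavaScript prototype chains are single, but several behaviour objects can be merged into one prototype:

```javascript
// Hypothetical behaviour objects - a dog's and a robot's:
var dogBehaviour = {
  bark: function () { return 'woof'; }
};
var robotBehaviour = {
  charge: function () { return 'charging'; }
};

function RoboDog() {}
// Merge both behaviours into a single prototype:
RoboDog.prototype = Object.assign({}, dogBehaviour, robotBehaviour);

var roboDog = new RoboDog();
console.log(roboDog.bark());   // 'woof'
console.log(roboDog.charge()); // 'charging'
```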

So finally, look at this code:

function MyClass() {
  this.constructor.apply(this, arguments);

  // expose only the main function, bound to the instance:
  return this.main.bind(this);
}

// list all functions; internal ones should start with '_'
MyClass.prototype = {
  constructor: function () {
    console.log(arguments); // output: 1, 2, 3 (along with other stuff)
  },
  main: function main() {
    return 'main function running';
  },
  external: function external() {
    return 'bar';
  },
  _internal: function internal() {
    return 'internal';
  }
};

var myInstance = new MyClass(1, 2, 3);
console.log(myInstance()); // output: 'main function running'

This code has some nice gems in it, as it obfuscates (but doesn’t fully hide) code you don’t want everyone to access. It puts the constructor away in a separate, more manageable function; it clearly lists all the functions we would like to expose; and it allows us to have external names (in our case “foo” is internally called “external”). And if we expose only the created instance and not the class itself, no one will be able to see the code of “_internal”.

So in conclusion, JavaScript classes are not so much OOP-driven as a tool to manage your code better. It’s not about working hours on end to define your interfaces and abstract classes only to realize the world isn’t modeled the way you think (been there, done that), but rather a way to group functions together and say “hey, dataObject, I want you to bark().” –

DataObject.prototype.bark = bark;
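As a toy sketch of that last line (DataObject and bark are made-up names here):

```javascript
// A plain data holder and a free-standing behaviour:
function DataObject(name) {
  this.name = name;
}

function bark() {
  return this.name + ' says woof';
}

var dog = new DataObject('Rex');

// Attach the behaviour after the instance already exists:
DataObject.prototype.bark = bark;

console.log(dog.bark()); // 'Rex says woof'
```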


This weekend I had the pleasure of attending the CancerDataDive Hackathon hosted by ProductForge at CodeBase. The general idea is to gather a bunch of young enthusiasts for intensive work and try to come up with cool innovative ideas and proof-of-concepts. This is a great deal for an institute: it gets developers to pay to contribute their skills and capabilities to its endeavor. I also learned that sometimes companies use hackathons as a recruiting tool. To be honest, I think it’s actually a good idea, as you get to see people work and interact with others in something that does feel like fun (as opposed to “feels like work”). I have some reservations, as this intensive, incredibly loud and overly dynamic environment doesn’t really represent real life (and is definitely not my cup of tea), but still – coping with such intensity should qualify as a good trait.

My two cents go to the team-formation part, as I was rushing to another event and tried to make it as efficient as possible. Fortunately, I immediately targeted the single person who mentioned he had an idea for a project and teamed up with him (later to be joined by 3 other gentlemen). The rest of the people were struggling to find reasonable teammates and come up with a startup idea at the same time. I think it might have been much more productive if we could first brainstorm ideas and then create teams based on commitment to ideas, rather than “well, these folks don’t strike me as psychos, now we need to think what we can do”.

For whatever reason, my team decided to base our application on Meteor.js. As I admit that my preferred vanilla.js style is not feasible for a fast-paced project, I agreed, hoping to learn more about this framework.

We didn’t try to publish a mobile app, which Meteor presumably allows, so I cannot comment on that. But I can tell that I found myself cringing, as Meteor expects/allows you to write db-access functions in the front-end. In that sense, it doesn’t differentiate between client-side and server-side at all. This flaw is harmless only when you have (near-)unlimited bandwidth; otherwise your app will falter once you actually try to synthesize large volumes of data and send only a subset to the front-end. You, as the developer, won’t have the ability to handle it.

We actually ran into this problem, as reading data from the database presumably happens synchronously, but in reality it returns an undefined value and the function weirdly runs in a loop until the data is retrieved. That’s probably one of the worst ways to handle an asynchronous command. Instead, I would have advised pausing the code until a response is retrieved (mind not to hog the CPU, though – only the thread) or forking to a separate thread once the data is retrieved.

Another task I found unreasonably daunting was updating the screen once it was already displayed. Yes, I could simply write to the DOM myself (which I eventually did), but as Meteor’s templates are Mustache-like, I didn’t find how to tell the template to re-run itself.

Lastly, the way to access a component’s variables kept changing depending on the current function – sometimes it’s this.variable and other times it’s Template.instance().variable. Weirdly enough, Template.instance() doesn’t indicate which template is being referred to, so calling a template’s function from a parent function might introduce the wrong template’s scope. Ultimately, for the quick-and-dirty job required, Meteor pulled through without wasting too much of our time, but for longer hauls I’d rather go vanilla or use any smart framework.

That said, I did enjoy the rapid development. It’s incredibly reckless (no time for testing) but I understand why customers would like it, as it provides results very quickly. We discussed the financial cost of medical errors, which amounts to roughly 25% of the ministry of health’s budget – and let’s assume this number holds for software bug fixing as well – so how much time/resources should we spend on writing tests beforehand? The cap would probably be 25%, so I understand why customers would like to save that portion of the money, but in reality it’s going to be spent one way or the other.

However, what I learned from this project is that development is incredibly unimportant at hackathons, which is quite sad given the number of developers who attended. The entire event revolves around thinking about cool ideas and pitching them in a 1-minute talk and then a 6-minute presentation. You don’t have to – in fact, you’re expected not to – show your working application, as experience has taught the organizers that 72-hours-worth of code is too likely to break down. So my advice is to simply not code at all, but rather make a beautiful mockup and a presentation filled with pictures of cute puglets. Yes, your presentation should address your idea’s feasibility – both in the sense of development and in the sense of legal issues. That’s why it might be helpful to have an engineer-mentor and a lawyer-mentor to give advice, but generally – hackathons are for people who collaborate on ideas and not necessarily on code.

Following are my impressions and thoughts inspired by the “AR in Action” conference at MIT’s Media Lab, to which I was kindly invited this week by John Werner.
“Augmented Reality” is the notion of adding an additional layer of data to our perceived reality. The most popular example of AR, as far as I could tell, is Pokemon Go, in which the characters appear in our real environment. But as was noted several times during the conference, it is not real AR, since it doesn’t truly interact with the environment but merely uses it as a background to present its characters. Still, this is the general idea – have some spectacles or a window (such as a tablet) through which one can look at his or her environment and get more information.
An interesting thought was proposed by Christopher Croteau from Intel: augmentation needn’t necessarily be visual. It can also be audio – for example, a running app that provides audio coaching is actually augmenting your running experience. Background music can also be considered augmentation.
AR’s biggest advantage over VR, or over the standard way of consuming data, is the lack of need to disconnect from the present. Along comes the famous photo of our generation, completely immersed in our mobile devices, completely disconnected from the “now”.
This made me wonder why it is so important to be in the “now”. “Now” can be boring (especially now, as I sit in the airport waiting for my flight back home). True, mobile disconnects us from the people immediately around us, but then again – what’s wrong with that? Calm down with your “heretic!” calls; I would personally rather talk with someone I care about than someone who just happened to sit next to me, and I’m pretty sure it’s the preferred choice of all parties involved. If someone prefers his virtual friends over your presence – I guess you’re just not interesting enough. I don’t really believe that, but I think it’s a thought worth exploring. But how can AR make this better? After all, I will still use technology to talk to my virtual friends and not the people present next to me. The only difference is that I will stare into nothingness like a weirdo instead of at a screen.
The conference had plenty of speakers – more than 100, according to the publications. Some of them preached to the choir about the wonderful potential of AR; others showed their work whether it was related to AR or not (some without even trying to conceal the fact that it was completely unrelated; I should mention that this doesn’t mean their talks were bad, just unrelated). But from what I gathered, AR has three usages nowadays: (i) show designs (e.g. an architect’s work); (ii) provide instructions; and (iii) be cool. Being cool – such as providing a 3D pop-up for a QR-code. It’s cool. It’s great advertisement. But being cool is something that has to be unique, and it’ll become over-used and boring incredibly fast.
As the AR field is still emerging, the conference was also about VR, which is actually easier to implement, as you don’t need to understand the real environment in which the user is present. But VR has a huge disadvantage – it completely disconnects you from the surroundings. As one of the speakers came to the stage with a headset on, I felt that he wasn’t really there, and didn’t really see a reason to be “there” myself. I think it has a lot to do with the emotional expression we provide using our eyes and eyebrows, and once these are covered – we just lose our audience.
Robert Scoble spoke about the “beautiful potential” of AR and how it will change our future. He pointed out three scenarios – mall shopping, hotels and driving. Personally, I believe that by the time AR is actually useful, automated cars should have taken over (and every day that passes by while people die in car accidents is a disgrace to humanity). I’m not exactly sure what he would change in his hotel experience, but the mall-shopping example bothered me, especially as I don’t go to malls and I think that “look how much money can be made of this” is an incredibly bad driver for innovation. It may be efficient, but it’s bad nonetheless.
There were a few interesting demos of really useful AR for instructions and tutorials. But they reminded me of the story about NASA’s $10M investment to invent a pen that can write in zero gravity while the Soviets simply used a pencil. It’s OK to experiment with the technology even when it’s not efficient, but in order to solve real-world problems, its advantages compared to low-tech solutions don’t necessarily have enough ROI.
Christopher Grayson’s suggestion of using AR to remember names (essentially by providing people with digital “name” tags) made me think about the right to stay anonymous. This, it should be mentioned, is one of the important reasons Google Glass failed. It’s true that I walked around the conference with my name tag on, but this is actually an incredibly inefficient technique, as it requires the reader to stand in front of me and make sure the tag isn’t flipped over (as it usually is) or covered by my jacket. Most likely I’ll know that s/he’s taking an interest in me, and I would feel less susceptible to scams by a stranger who knows too much about me.
He took pride in having more than 2000 friends on LinkedIn, while socially speaking, we’re able to maintain only up to 1500 friends. I think this requires a redefinition of the word “friend”, as it raises the question of the type of relationship one keeps with his closest thousand friends.
A word on technicalities. There were a few talks that were… ill-prepared. Whether it was the technology failing to display the presentation or demo on the big screen, or speakers who clearly didn’t prepare their talks and just rambled on. Worse were those who weren’t even interesting or at least funny. Rightfully, the organizers mentioned that for future conferences they’ll “audition” the speakers, so I’m optimistic in that regard.
I didn’t attend any panels but one, which I happened to stumble upon as I was waiting for the following talk. This panel was about the “Future of AR”, and each panelist, in his own words, said, to my dismay, that the future cannot be predicted. They later continued to ramble, but for me the picture was clear: the future is hazy. Personally, I think the future of AR lies with incredibly smart AI, image recognition and processing. It will then be able to whisper useful information to help you make conscious decisions. In its evolution AR must – and I cannot emphasize enough how critical this is – MUST get rid of the clunky VR goggles; it will never work with them. The alternative should be either normal plain glasses through which the user’s pupils are still visible, or contact lenses that provide this information. Yes, we have a long way to go, but that’s the future AR should aspire to.
A few honorable mentions: Bob Metcalfe (the guy who invented Ethernet) and Dan Bricklin (the guy who invented digital spreadsheets), who didn’t actually talk about AR but are incredibly smart and entertaining; Gordon Bing from EA, who showed how AR can be inspired by computer games; and last but not least, the guys from PTC, who gave a few demos of AR that actually works efficiently.

TL;DR, my first thought about ES6 was “but you’re just making things worse!”. I think the rule of thumb for improving a language is that it should become easier to learn, not harder, and clearly that’s not going to happen when you keep adding more arbitrary tools that do the same things slightly differently (the best example of this is for…of, which iterates over an object’s iterable elements, as opposed to for…in, which iterates over all of the object’s elements).

To cut things short – JS is missing official versioning that would allow it to purge bad code. It actually does have some versioning, because when you add a new feature it means that an older browser won’t support your code; additionally, we already have “use strict”, which is effectively versioning. So instead of ‘use strict’, we’d have ‘use es6’ and everyone would know how to handle it. We can later think of backward-compatibility for the weird folks who still use IE6 by transpiling and such. That’s a different story and isn’t that complicated, especially as I’m aiming mainly to clean up the language and less to add new features.

So looking at First steps with ECMAScript 6, I compiled my own remarks/suggestions of how I believe things should be done:

1. Scoping: var, let, const, IIFE and blocks

Originally {…} was supposed to be a block that contains its own private local variables. JS screwed this up by leaking the variables to the enclosing function. I’m not sure why, but now they try to patch it up using ‘let’. So let’s make it much simpler – {…} has its own variables that die as soon as the block ends, unless they’re being used by an internal block that outlives the original block. This is how it should have been to begin with. Fixing is better than patching.
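For illustration, this is the difference between the original behaviour and the ‘let’ patch:

```javascript
// var ignores block boundaries; let (the ES6 patch) respects them:
{
  var a = 1;
  let b = 2;
}

console.log(a);        // 1 - the var leaked out of the block
console.log(typeof b); // 'undefined' - the let died with the block
```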

“Const” may be a nice concept, but when talking about pointers, which we do 99% of the time, it’s actually meaningless.
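To illustrate the point: const freezes the binding, not the value it points to:

```javascript
const list = [1, 2];
list.push(3);       // fine - the array itself is still mutable
console.log(list);  // [ 1, 2, 3 ]

// list = [4, 5];   // only this reassignment would throw:
// TypeError: Assignment to constant variable.
```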

2. Template literals = `hello ${name}!`

It’s a nice feature but, to be honest, is it really critical enough to be a core part of the language? I agree that the ability to write multi-line strings can be incredibly useful; if we could only enforce semi-colons at the end of statements, our code would become much more concise, and everyone would know that line breaks mean nothing to the compiler. And again, I don’t think template engines are wrong – I just don’t think they should be part of a core language. Keeping them as a separate library would allow them to evolve independently. Why evolve? Because we might want conditionals, loops, sub-templates and a million other things. Why limit it?
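For reference, the feature in question next to its plain-ES5 equivalent:

```javascript
var name = 'world';

// ES6 template literal - interpolation plus multi-line strings:
var es6 = `hello ${name}!
second line`;

// The ES5 equivalent - concatenation with explicit line breaks:
var es5 = 'hello ' + name + '!\n' + 'second line';

console.log(es6 === es5); // true
```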

3. Arrow function

Arrow functions are less readable. Don’t. Just don’t.

4. Multiple return values

Functions return a single value. It’s a mathematical thing. This single value might contain an array, or a set of values. We might want to be able to easily unpack the values (more on splat in a second), but the bottom line is that a function returns a single value. Trying to return weird things like { obj1, obj2 }, which is actually an abbreviation of { obj1: obj1, obj2: obj2 }, creates syntax anomalies which in turn make the code less readable.
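A sketch of the point (minMax is a made-up name): even with the new shorthand, the function still returns exactly one value – an object – which the caller then unpacks:

```javascript
function minMax(values) {
  var min = Math.min.apply(null, values);
  var max = Math.max.apply(null, values);
  return { min: min, max: max }; // ES6 shorthand: return { min, max };
}

var result = minMax([3, 1, 4]);      // one value comes back...
console.log(result.min, result.max); // 1 4

// ...which ES6 destructuring then breaks apart:
var { min, max } = minMax([3, 1, 4]);
console.log(min, max); // 1 4
```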

Reduce the anomalies! Stop adding more of them! On a side note, I never really understood why typeof and instanceof cannot simply be treated as functions, or why they are different from one another. Inconsistencies are what make any language dreadful. This is something I would have liked fixed.

5. For (;;) => For (…in…) => forEach(function) => for (…of…)

So for…of is just like for…in, only more useful, as it actually returns the iterable elements of an object and not all its elements (which might include functions, for example). We have a zillion kinds of loops and iterations; one cannot deny that this is a money-pit and there’s never going to be a solution that makes everyone happy. And that’s OK – but why incorporate ALL the solutions into the language? It only makes it more complicated.

Why can’t we simply say that an object has an iterables property, returning an array of its iterables, so that “for (key, value in Object.iterables(map)) {}” would iterate over ONLY the relevant items, where in each iteration key is the index and value is the iterable element itself. There – problem solved without adding a new command.

We already have Object.keys, so it shouldn’t be a problem to add Object.values and Object.iterables.
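A sketch of how close we already are with Object.keys alone (the iterables property above is the proposal, not real JS):

```javascript
var map = { a: 1, b: 2 };

// Object.keys returns only the object's own enumerable keys,
// so a plain loop already skips inherited members:
Object.keys(map).forEach(function (key, index) {
  console.log(index, key, map[key]); // 0 'a' 1, then 1 'b' 2
});
```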

6. Parameter default values

Avoiding the need to handle default values within the code is very nice, but it leaves the devil an opportunity to introduce hell when my default value is actually a function that runs… when?

This is one complexity I think we should avoid.
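To make the “runs… when?” concrete – a default value expression is re-evaluated on every call that omits the argument (sideEffect is a made-up name):

```javascript
var calls = 0;

function sideEffect() {
  calls += 1;
  return calls;
}

// The default is evaluated at call time, not at definition time:
function greet(id = sideEffect()) {
  return id;
}

greet();            // runs sideEffect
greet();            // runs it again
greet(42);          // the default is skipped
console.log(calls); // 2
```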

7. Splat, Spread, Splat and Handling named parameters

We’d like a feature that says “hey, all these values should actually be part of an array”, and vice versa – “hey, this object is actually a bunch of separate variables”.

Here comes splat – “…” – which is an OK idea. So why can’t we simply have it the other way around as well:

function myFunc (...numbers) {
  return numbers[2];
}

myFunc(1, 2, 3); // returns 3

// and the proposed opposite direction (a suggestion, not valid JS):
function myFunc2 (values..., ...other) {
  return (first + second + other[2]);
}

myFunc2({first: 1, second: 2}, 3, 4, 5); // returns 8

In my example, “values” doesn’t exist anywhere – it’s created and immediately broken apart into its sub-elements.

In the current proposal, “…” is actually used for both scenarios – both to collect variables and to spread them. I suggest that the position hint at the action – …collect, spread… – making it much more readable.

It’s worth mentioning that whenever you invent a new element in a language, someone is very likely to use it in a way you didn’t expect. For example, what will happen if I write source…target? Well, you guessed it: it will break source into its elements and recollect them back into target.

8. Method definitions instead of function expressions in object literals

The ability to write var obj = { myFunction () { … } } is pure laziness and breaks the consistency of the code. That’s bad.

9. Classes and Class extends

There’s a ruling paradigm called “object oriented”, but JavaScript isn’t about it. JS is about manipulating JSON objects. JS is perfectly fine without classes; stop forcing it into something it’s not. All JS apps start small and fast, but as soon as they become robust, they also become incredibly slow. So, people, please stop trying to make complicated JS. You’re killing our web!

Prototypal development means that whenever I get a JSON object from the server I can easily apply functions onto it – set its prototype to catBehaviour, and now the cat JSON I got can cat.meow(). I don’t really need to create a new object for that. Why do you insist on making things more complicated?
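A sketch of that flow (catBehaviour and the JSON payload are made up):

```javascript
// A behaviour object shared by all cats:
var catBehaviour = {
  meow: function () { return this.name + ' says meow'; }
};

// A plain data object, e.g. freshly parsed from a server response:
var cat = JSON.parse('{"name": "Mitzi"}');

// Attach the behaviour to the existing object - no class, no copying:
Object.setPrototypeOf(cat, catBehaviour);

console.log(cat.meow()); // 'Mitzi says meow'
```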

I agree that the current prototype mechanism is slightly too complex but why not simply fix it?

10. From objects to Maps

JavaScript’s Object suffers from having string-only keys; not only that, there’s an escaping issue with them. So ES6 introduces a whole new element type just to solve the escaping issue. Seriously? I’ve never been bothered by that. And if you do go ahead and fix it (it’s not clear why not update the existing Object), why not allow the keys to be any object (you can flatten them with JSON.stringify internally if you want)?

11. New String and Array functions

Yes, with ES6 you can now have string.startsWith(). But seriously, who cares?

You do realise that because you decided to use this stupid function, you’ll no longer support ES5-only browsers, right?

And maybe this is what it really comes down to – languages should have extremely long cycles – let’s say update a language every 5 years, if not more, in order to give it time to propagate everywhere. JavaScript is de-facto the most important language in the world today not because it’s a great language, but because everyone uses it. If you make it into something that not everyone uses – they’ll just keep using ES5. All those small nice-to-have functions should be in an external layer, or framework. Let’s call the language coreJS, and this layer scaffoldJS, which can be easily updated, say every 2 years. On top of that we can have libraries that every developer decides whether to use – a reasonable update time for those would be 6 months.

coreJS should be super-stable, super-consistent with itself, super-reliable, super-simple (not super-easy) and super-fast. Once we have that, we can start talking about the external layers or silly features like startsWith or arrow functions.

Passwords are troublesome. I can tell they’re troublesome because most websites have the “Lost your password?” button readily available. Because passwords tend to be lost, or forgotten, or entered via a keyboard with a different layout (try using the “£” symbol, and good luck to all American-keyboard users).

Passwords are crackable, and the majority of people don’t understand the likelihood of someone trying to crack theirs. So to make the life of hackers slightly more difficult, we now have the CAPTCHA mechanism which, aside from being already cracked by hackers, is simply annoying. In fact, the “Lost your password?” button is annoying too – you might as well call it “Annoyed? Click here”. Think of it this way – the user clicking this button is only one step away from not using the service.

And funnily enough – this button is actually the solution to our problem. When you click the button, a two-step authentication is initiated – usually via email – which includes an unbreakable code that allows you to update your password. Well, why just change the password? Why shouldn’t it allow access to the whole service?

With this reasoning, once you can edit the password – the service is practically accessible, and should therefore be so!

So instead of asking the user for his username (which is usually an email anyhow) and password – why not simply ask for the email?

So here’s the entire procedure:

  • The user opens the service, types in his email and clicks submit
  • A special token containing its creation time and the user’s IP is created and sent to the user’s email
  • The user clicks the link in his email
  • The token is sent back to the server, which verifies that the IP matches and the token hasn’t yet expired (say, one hour from the token’s creation time)
  • A new token containing the user-id and the user’s IP is created, encrypted, and sent to the user
  • The user cannot decrypt this token, but whenever he communicates with the server he passes it along to authenticate himself

The browser should keep the authentication token for a reasonable time – let’s say 3–6 months – during which the user won’t have to go through this process again. This cookie cannot simply be copied to another machine, as it requires the computer to have the same IP. And if you’re truly concerned with security, the service can ask the user for his public PGP key along with his email, and thus send him an encrypted mail only he can decrypt.

The only problem I found with this mechanism is that it requires your user to temporarily leave the service and check his email. How many users will you lose because they forgot to return, and how many would you have lost because the password was just another hassle they didn’t care to handle?

Some might say JavaScript encourages nesting, for example:

var greetDeeplyCurried = function (greeting) {
  return function (separator) {
    return function (emphasis) {
      return function (name) {
        console.log(greeting + separator + name + emphasis);
      };
    };
  };
};

greetDeeplyCurried('Hello')(', ')('!')('World'); // output: Hello, World!

Personally, I don’t like anonymous functions. They make it harder to debug. If you do use a function expression, please still give it a name. Note that in

var bar = function () {…}

the function is still anonymous. You should either write

var foo = function foo () {}

or

function foo (){}; var bar = foo;
I also don’t like creating functions inside other functions, since I don’t know how many times the wrapping function will be called. Simply define two functions and let the wrapping one call the internal one.

Nesting functions is the lazy solution to JavaScript’s scope issue: as functions don’t really hold a scope of their own, one might find it easier to have all the scopes nested together, so you know for sure which scope is being used.

I would resolve that by using bind, call and apply to determine the specific scope in which I would like my function to run. Generally, I would avoid outside-the-function scope variables – each function should use only what is passed into it, and with bind you can send parameters from different sources. In this example –

myButton.onclick = foo.bind({},myVariable)

foo will receive two parameters – myVariable and the click event – each coming from a different source.
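A runnable sketch of the same idea (the names are made up): bind pre-fills the first parameter, and the caller supplies the rest.

```javascript
// The handler uses only what is passed in - no outer-scope variables:
function onClick(myVariable, event) {
  return myVariable + ' / ' + event;
}

// bind fixes `this` (unused here) and pre-fills myVariable:
var handler = onClick.bind(null, 'configured value');

// The caller (e.g. the browser firing a click) adds the second parameter:
console.log(handler('click-event')); // 'configured value / click-event'
```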

That being said, my server-side has a lot of sequential asynchronous operations – load a topic, use its data to load the community and the member, then make changes, write them and send the result to the client side.

I suggest a sergeant to manage our tasks on our behalf.

sergeant({ topic: { table: db.topic,
                    data: userNewTopic,
                    after: sergeant.validate },
           member: { table: membership,
                     load: { userId: },
                     after: sergeant.isSuccess },
           savedTopic: { table: db.topic,
                         after: sergeant.isSuccess },
           finalize: { json: true } },
         'topic,member,savedTopic,finalize', callback);


sergeant({ map of tasks },
         [ordered list of task names], methodToRunWhenDone);

A task has the following properties:

  • table – a node-ORM object
  • load – either an object-id value or a node-ORM-compliant map of properties
  • multiple – if this property exists, load will assume multiple results. multiple can be a node-ORM-compliant map of properties such as order, limit and groupBy
  • data – a data placeholder to be passed on or used by save
  • save – boolean flag. When save=true, data will be saved, and repository[taskName] will then store the save operation’s output
  • before – a function to be called before commencing the task’s main operation
  • after – a function to be called once the task’s main operation is completed
  • json – will run the toJSON() function on items in the repository

before/after functions may return the following values:

  • true/undefined – continue operation as normal
  • false – the main operation will be skipped or discarded. If before returns false, after will not run either
  • Error – sergeant will stop its operation and call methodToRunWhenDone(error)

load overrides data (so data acts as a default value until load is populated).

A task with both load and save=true will throw an error before any task starts.

[ordered list of task names] can either be an array of task names or a comma-separated string.

The before and after functions’ parameters are (repository, taskMap, currentTaskName)

json runs after the main operation but before the after function. Its value can be one of the following:

  • true – will json-convert the entire repository
  • [] – (empty array) will json-convert only the current task’s result. If the task has no data or load properties, it will exit before running
  • [taskNames] – (an array of strings or a comma-separated string) will json-convert the specified tasks’ output. json is agnostic to whether the tasks have run or not
  • undefined/false – won’t try to json-convert


If repository[taskName] is an array, json returns an array of objects, each of which was json-converted.

A good practice is to run the last json at the end of the chain and delete any unnecessary temporary repository elements.

So in conclusion, sergeant.js is used to run a list of db-related tasks (which seems to cover most of my server-side methods) in sequential order, saving the need to nest your operations in an unreadable fashion.
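For a taste of how such a runner could work, here is a minimal sketch of the sequential core (this is not the real sergeant.js – the task shape is simplified to plain callback functions):

```javascript
// Run named tasks one after the other, sharing a repository of results.
function sergeant(tasks, order, done) {
  var repository = {};
  var names = typeof order === 'string' ? order.split(',') : order;

  function next(index) {
    if (index >= names.length) return done(null, repository);
    var name = names[index].trim();
    tasks[name](repository, function (error, result) {
      if (error) return done(error); // stop the chain on the first error
      repository[name] = result;
      next(index + 1);
    });
  }

  next(0);
}

// usage: each task reads what earlier tasks stored in the repository
sergeant({
  topic:  function (repo, cb) { cb(null, { id: 1 }); },
  member: function (repo, cb) { cb(null, { topicId: repo.topic.id }); }
}, 'topic,member', function (error, repo) {
  console.log(repo.member.topicId); // 1
});
```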