The thing I really like about spreadsheet applications (Microsoft Excel, Apple Numbers and Google Sheets) is that they involve a different kind of thinking: instead of standard procedural programming (things happen one after the other) or event-driven, object-oriented programming, a spreadsheet performs all of its calculations at once to produce a single state. There are no loops or any kind of code branching, and no user interaction during the computation; it's just "given this input, this is the new state". Of course, when cells refer to one another you can simulate iteration, but circular references are forbidden – you cannot refer back to a cell that was already computed. It requires a different mindset.

The history of spreadsheets is fascinating on its own; it's worth listening to Dan Bricklin's story on TED, and you can dig even further with Joel Spolsky's "You suck at Excel" talk. Excel is (or at least was) amazing, and product designers should learn from it how to build an application with a proper learning curve, where anyone can do something useful with it right away.

A diagram of a learning curve

And it was only natural that other major players would strive to have their own spreadsheet, and although Excel is still a major player, I think it has lost its lead to Google Sheets – which is online and free (and therefore easy to share).

I recently had some fun working on a spreadsheet for a friend who is a dance teacher. He teaches weekly dance classes at three levels (beginners, intermediate and advanced), and so he keeps track of his students and their progress in a simple spreadsheet. Each row is a student and each column is a day; in each cell he writes which class the student took: "A" for advanced, "B" for beginners and "I" for intermediate.

A simple table where each row is a student and each column is a day

The table was pretty straightforward, and he wanted to count how many classes each student attended and how many attendees he had each day. It's slightly tricky because each day can have multiple entries and each class should be counted separately. Of course, we could give each class its own column, but then the table would grow to three times its width. So how can we still count each class? Simple: by concatenating an entire row (or column) into a single cell and then counting the characters: =LEN(CONCATENATE(C3:C18)).

The same table as before, but now with a summary row and column for the number of attendees

If we want to count only the beginners classes, that's also easy: we'll use the same formula, but this time we'll remove all the characters that are not "B" using a regular expression: =LEN(REGEXREPLACE(CONCATENATE($C7:$I7),"[^B]","")). And if we want to know how much money he made each day, we'll just multiply the number of attendees by the fee. Super simple.

Things got more complicated when he started selling discounted blocks of five entries for a cheaper price. The question I was faced with was how to deal with that – or, more precisely, I was asked to add a column that tells us how many discounted entries the student hasn't used yet. After some thought I couldn't figure out how to simply tell, for each class, how many unused block entries were left, so I created another page (called "db.blocks") where I could store additional information per student session – namely, the leftover. Let's say that each time a student buys a block we mark it with "@"; using a similar technique as before, we count all the block characters (times 5, the number of entries per block) and deduct the entries that took place. We also read the leftover entries from the previous class and make sure we don't go below zero (no unused block entries). We end up with something like this: =MAX(0, C7 + 5*LEN(REGEXREPLACE('attendance'!D7,"[^@]","")) - LEN(REGEXREPLACE('attendance'!D7,"[^BIA]",""))).


That worked well for quite some time, but then he decided to introduce different sorts of blocks – longer ones (with a bigger discount), shorter ones, and concessions (for university students). Looking back at the previous problem, we had to retain two pieces of information per cell – the number of unused entries and the original input. Now it's no longer enough to keep the number of unused entries; we also need to retain their type. So instead of a single number, we should remember an array of types – or better yet, the cost per entry. Let's say the normal fee is £6 but I bought a block of 4 entries for £20; then we should retain "5,5,5,5".

Our formula was quite complicated to begin with, but now, with different kinds of blocks in different sizes and at different prices, it became quite unmaintainable. And here is something that is missing from modern-day spreadsheets: I wish I could encapsulate my entire formula into a simplified function. Wouldn't it be just great if I could enter =process(prevCell, 'attendance'!C7, blocksDecRange) in each cell of my "db.blocks" sheet and it would do all the magic behind the scenes? To be fair, Google Sheets does support that, but the function must be fairly simple and run a small number of times, whereas I need to run my formula on every cell of a very large table.
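
Just to illustrate the idea, such a custom function would be written in Apps Script (i.e. JavaScript). Here's a minimal sketch – the name PROCESS and the three-column layout of the blocks range are made up for the example, not something from the actual sheet:

// a sketch of a Google Sheets custom function, callable from a cell as
// =PROCESS(C7, 'attendance'!D7, blocks!$A$2:$C$10)
// prevCredit:  comma-separated costs of unused entries from the previous class, e.g. "5,5,5"
// attendance:  the day's attendance cell, e.g. "@B" (bought a block, took a beginners class)
// blocksRange: rows of [symbol, size, pricePerEntry] – a made-up layout for this example
function PROCESS(prevCredit, attendance, blocksRange) {
  var credit = prevCredit ? String(prevCredit).split(',').map(Number) : [];
  var blocks = {};

  blocksRange.forEach(function (row) {
    blocks[row[0]] = { size: row[1], price: row[2] };
  });

  String(attendance).split('').forEach(function (c) {
    var block = blocks[c];
    if (block) {
      // a block was bought: push its per-entry cost onto the credit stack, once per entry
      for (var i = 0; i < block.size; i++) credit.push(block.price);
    } else if (credit.length) {
      // a class was attended and there's prepaid credit left: use one entry up
      credit.pop();
    }
  });

  return credit.join(',');
}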

But Sheets does have nice scripting support (so do the other spreadsheets), so instead I used the onEdit(event) hook, which fires whenever the user changes a cell. I first verify that the edited cell is within the range of my attendance table, then process it and write the remaining entries in one sheet and the amount spent that day in another. This lets me easily calculate how much money the teacher made on a given day and how much money each student spent. When reading each day's set of characters, we try to figure out whether a character represents a block: if so, we push that block's per-entry costs onto our "credit" stack; otherwise we pop an entry off the stack, and if the stack is empty the student pays full price. Here's the complete code, written in the old version of JavaScript that Apps Script runs:

function onEdit(e){
  var spreadsheet = e.source,
   range = e.range,
   rowIndex = range.getRow(),
   colIndex = range.getColumn(), 
   blocks = getBlockMap(spreadsheet),
   attendanceTable = spreadsheet.getRangeByName('attendance'),
   offsetCell = attendanceTable.getCell(1,1),
   offsetRow = rowIndex - offsetCell.getRow(),
   offsetCol = colIndex - offsetCell.getColumn();
 
  // Exit if the edit is outside the attendance table
  // ('נוכחות' is the name of the attendance sheet – Hebrew for "attendance")
  if ((e.range.getSheet().getName() !== 'נוכחות') ||
    (rowIndex < attendanceTable.getRow() || rowIndex > attendanceTable.getLastRow()) ||
    (colIndex < attendanceTable.getColumn() || colIndex > attendanceTable.getLastColumn())) {
    return;
  }
 
  onEditUpdateCell(spreadsheet, attendanceTable.getValues(), offsetRow, offsetCol, blocks);
}

function onEditUpdateCell(spreadsheet, values, row, col, blocks) {
  var dbEntries = spreadsheet.getRangeByName('db.entries'),
    dbSpent = spreadsheet.getRangeByName('db.spent'),
    // leftover credit from the previous class (the cell one column to the left, 1-based)
    prevCredit = col ? getCredit(dbEntries, row + 1, col) : [];

  updateCell(prevCredit, values[row][col], dbEntries.getCell(row + 1, col + 1), dbSpent.getCell(row + 1, col + 1), blocks);
}

// get the previous class's unused entries as an array of per-entry costs
function getCredit(dbEntries, row, col) {
  var value = dbEntries.getCell(row, col).getValue();

  if (value === '') {
    return [];
  }

  // stored as a comma-separated string, e.g. "5,5,5" – convert back to numbers
  return String(value).split(',').map(Number);
}

function getBlockMap(spreadsheet) {
  var map = {},
    table = spreadsheet.getRangeByName('blocks').getValues();

  // each row of the 'blocks' range describes one block type:
  // column 1 holds the price per entry, column 3 the block size and column 5 the symbol used in the attendance sheet
  for (var i = 0; i < table.length; i++) {
    map[table[i][4]] = {
      size: table[i][2],
      price: table[i][0]
    };
  }

  return map;
}

function updateCell(credit, value, tabsTarget, spentTarget, blocks) {
  var spentToday = 0;

  for (var i = 0; i < value.length; i++) {
    var c = value.charAt(i),
      block = blocks[c];

    if (block !== undefined) {
      // the student bought a block: push its per-entry cost onto the credit stack, once per entry
      addToCredit(credit, block.size, block.price);
    } else if (credit.length) {
      // the student took a class and has prepaid entries left: use one up
      spentToday += credit.pop();
    } else {
      // no prepaid entries left: full price for this class
      spentToday += blocks['fullcost'].price;
    }
  }

  tabsTarget.setValue(credit.join(','));
  spentTarget.setValue(spentToday);

  return credit;
}

function addToCredit(arr, times, value) {
  while (times--) {
    arr.push(value);
  }
}
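
To make the bookkeeping concrete, here's roughly what updateCell does for a couple of sample inputs, using the prices from earlier (a block symbol "@" worth 4 entries at £5 each, and a £6 full price):

// blocks = { '@': { size: 4, price: 5 }, 'fullcost': { price: 6 } }
// updateCell([], 'B', ...)   → no prepaid entries, so spent = 6 and the stored credit is ''
// updateCell([], '@B', ...)  → '@' pushes 5,5,5,5 and 'B' pops one, so spent = 5 and the credit left is '5,5,5'
// updateCell([5], 'I', ...)  → the last prepaid entry is used up, so spent = 5 and the credit left is ''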

So, in conclusion: spreadsheets are a super powerful tool, and I encourage you to try them when you want to display your information and manipulate it easily. They might even be a good introduction to programming, I reckon. Scripting for spreadsheets, at least the way I see it, means going beyond the normal capabilities of the application, but that's what's so great about it – you can add the missing functionality that makes your data useful for you.

Finally, for other aspects of spreadsheets you might be interested in Matt Parker’s comedy routine about spreadsheets.


SCSS and Less are two CSS preprocessors, which means I can write slightly smarter pseudo-CSS that gets compiled into normal CSS, which the client receives without ever knowing it was written otherwise. I previously wrote about CSS more extensively, but there's one more thing worth mentioning – Sass and SCSS have the same features, only Sass uses an indentation-based syntax (a bit like YAML), while SCSS is written in a pseudo-normal CSS format. As I prefer to keep things close to the original, I used SCSS.

Generally SCSS is considered more popular than Less, but feature-wise they're pretty much alike. The reason I moved was very simple: in order to compile SCSS I used the only SCSS compiler I could find for node.js – node-sass, which is actually a wrapper around a library written in C. But by using this node module, my project got a vulnerability warning from GitHub –

Screenshot of vulnerability warning concerning hapijs/hoek by GitHub

Yes, the warning isn't about node-sass, and to be honest, not even about its direct dependencies; but somewhere within the 283 (!) items in its dependency tree (in comparison, less has 67 items) there's this helper/utility/shortcut/who-cares module that produces issues. And the thing that really triggered me was that when I tried to find a solution, I bumped into "GitHub is wrong, nothing we can do", and that infuriated me.

I don't care who is wrong; I don't even care whether this vulnerability is real or not. I care about the long line of people responsible for making sure I won't have a vulnerability warning in my project, and if all of them fail to take responsibility and fix the problem (by appealing to GitHub, as far as I'm concerned) – then I will take responsibility and stop using SCSS. There, problem solved.
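
For the record, the switch itself was tiny – the build step simply calls the Less compiler instead of node-sass. Roughly something like this (a sketch; the file names are made up):

// build-css.js – a rough sketch of the build step after the switch
var fs = require('fs');
var less = require('less');

var source = fs.readFileSync('src/branding.less', 'utf8');

less.render(source, { filename: 'src/branding.less' })
  .then(function (output) {
    fs.writeFileSync('assets/css/branding.css', output.css);
  })
  .catch(function (err) {
    console.error(err);
    process.exit(1);
  });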

It's worth listening to Ryan Dahl, the guy who came up with Node.js (and is now working on making new mistakes in his Node.js alternative). One of the things he regrets is node_modules and what it has become. Another thing he regrets is not sandboxing modules away from the file system and the network (unless they're given permission, of course). These two regrets in conjunction lead to horror stories where some meaningless module down the line accidentally adds a trojan horse (or worse – is tampered with by a malicious third party) that hacks and harvests data from my application; but when my customers sue, they will sue me.

So my immediate advice is to take any warning seriously and be very wary of the node modules you add to your projects. In the longer run, I hope that future versions of node will sandbox modules by default, and that my package.json will specify per module whether it gets access to the file system and the network. I also hope that the npm and yarn companies (yes, npm is foremost a company, while yarn belongs to Facebook) will take responsibility for the libraries they host and certify that modules are safe to use. They can charge the module developers for the certificate, or they can charge the module users; it doesn't really matter, as long as the certificate means the code is safe to use.

Finally, as a small bonus, here are the main CSS preprocessor features and why I don't use some of them:

  • Variables – use them. 👍
  • Nesting – deeply nested CSS selectors are bad for performance and a pain to override. I avoid nesting altogether, so that over-nesting never becomes too easy. 👎
  • Import – the preprocessor concatenates imported files into a single file (and therefore a single HTTP request). That's great, but it can also be handled by your bundler as far as I'm concerned. 👍
  • Mixins – used to group properties together. The common example is grouping vendor prefixes, but in reality the preprocessor can do that for you automatically (probably with a plugin). The Less format for mixins is reasonable. 👍
  • Extends – used to "borrow" properties from another class; essentially a different format for mixins. For Less I would avoid it (for SCSS it's the other way around – I would avoid mixins and use extends, staying closer to the real CSS format). 👎
  • Operations – essentially doing manual calculations for you (such as sqrt). As this is a one-time calculation, I'd avoid it. 👎

Sometimes it feels like the software development community doesn't have anything better to do than argue about stupid things like tabs vs. spaces. I mean, this is an incredibly meaningless issue in terms of its impact on the end user – why does anyone even talk about it?

That said, I have an issue with the current path some people are trying to take CSS down, namely – functional CSS. A quick introduction: we've had Cascading Style Sheets since 1996, and it's one of the technologies that helped the internet revolutionise our world. Essentially, it's a list of styling properties (mostly sizes, colours and spacing) and different ways to apply them. Before CSS you'd write your style as part of the HTML –

<b>this is bold</b>

But there’s a limit to how many html tags we can have (is there?) so they came up with a more expressive syntax:

<span style="font-weight:bold">this is bold</span>

This is now considered bad practice (TIP 1: don't write inline styling that cannot be overridden), because if you have hundreds of <li> elements you'll have to apply the style to each and every one of them, and then making changes becomes hell. So instead you'll have:

<head>
  <style>
    span {
      font-weight: bold
    }
  </style>
</head>
<body>
  <span>this is bold</span>
</body>


This, too, is now considered bad practice (TIP 2: don't apply styles directly to tags; you don't know where else they'll be used), as maybe we want to apply our styling to only some of the span elements, so we'd use classes:

<head>
  <style>
    .bold {
      font-weight: bold
    }
  </style>
</head>
<body>
  <span class="bold">this is bold</span>
</body>

In the evolutionary story I'm trying to tell, this is a critical junction, because the naming of that class is critical – should I name CSS classes based on what they do ("make the text bold") or based on where they're used ("emphasised text")? I'll continue with the story and get back to that later. The next step was the brilliant move of putting the styling in a separate file that I can apply to other HTML pages sharing the same styling:

branding.css:
.bold {
  font-weight: bold
}

index.html:
<head>
  <link href="assets/css/branding.css" rel="stylesheet" />
</head>
<body>
  <span class="bold">this is bold</span>
</body>

And along came the CSS preprocessors (such as Less and Sass) to help resolve a critical issue: let's say I have my special red colour #f45042 sprawled all over my CSS file. And it might change, right? So instead of going through all the instances and fixing them, I'll set a preprocessed variable $clr-red that is used everywhere, and when it's time to change it, I'll only do it once. Why should it be preprocessed and not, say, done using CSS variables? Because it's not going to change at run-time, and there's no reason to burden the client with a computation we can do once for everyone anyhow.

And then came functional CSS and threw us back twenty years. It came from the same line of thinking as React's approach to HTML (where HTML is an embedded part of the JavaScript code), and it looks like this:

<span class="b">this is bold</span>

Essentially, it means we've lost the entire world of cascading and of context-related, meaningful class names; so let's go back to the critical junction in our story and the meaning of CSS class names. Why shouldn't we name classes after what they do? Because it leads to redundancy – it forces you to create, and later maintain, classes you might need, as opposed to having just the classes that are relevant to your code. Functional CSS solves that problem by adding another layer of complexity (maintaining the compiler code and keeping track of its API on top of normal CSS). Additionally, functional CSS will always be chasing real CSS's tail in terms of adopting features (animations, for example). If we agree that class names should describe where the class is used and not what it does, functional CSS becomes irrelevant.

CSS isn't perfect. It could've been much friendlier had it followed a JSON-like format, but it doesn't. Maybe CSS4 will, but for now please stop making it something it's not.

And finally, a few more tips –

  • Don't use element IDs (e.g. #search { ... }) as they're a pain to override.
  • Whether you use BEM or not, class names should describe where they are meant to be used (e.g. .button-primary).
  • Some of your CSS classes will be commonly used (such as .button) and it's perfectly fine that some will be used only once (e.g. .about-team-image). However, the two kinds should be split into different files, as the common.css file could later be exported to other projects.

More on the history of css can be found on css-tricks.com.

I was recently tasked with setting up a server that had three auxiliary servers: Elasticsearch, Redis and MySQL. The documentation included lengthy instructions on how to install all three on Linux, but disregarded the facts that (a) I have a 2013 Mac; and (b) I already had an incompatible MySQL server running on my system, which meant it took me a while to realise it was incompatible, then remove it, then reinstall several versions until I got it to work.
TIP 1: The Mac MySQL servers that integrate with the system settings and can be started and shut down with the click of a button, or even the ones that come with MAMP – they're great and really easy to use. But sometimes you need the command-line MySQL that comes from Homebrew. It's a hassle, but that's life. Removing a MySQL server can be a pain.
TIP 2: Make sure you're running the right server. I've wasted too much time banging my head against the wall wondering why my changes didn't do anything, only to realise I had another server running and blocking the ports from the current instance.
So it took me a day to get everything running, and I was left with the thought: really? Must it be such a hassle? And worse – must it be a different kind of hassle on every operating system?
Along comes Docker; let's do a quick introduction:
Let's say Adam wants to publish his website. He can either have his own dedicated machine, which will cost him a fortune and a maintenance headache, or he can rent a server in a server farm.
Now let's say Brenda has a server farm. She can either give Adam his own dedicated machine, or she can host him on the same server that runs her own website; those websites don't do any heavy lifting anyhow. But if her website crashes, she'd better make sure Adam's website doesn't go down with it, because that would mean her service is unreliable. And this is where virtual machines come in: each virtual machine runs in its own sandbox, so whenever something bad happens inside it – it stays there and doesn't affect anything co-existing on the same machine. Docker containers are the same idea, only instead of running a complete operating system with all its features and functionality, they run just what you need (for example, an Apache server or a MySQL server).

We use Docker to encapsulate a cumbersome setup into a single execution command

So let's say Adam wants to have a MySQL server for his website. He can simply download a Docker image (a preinstalled copy) of a MySQL server by running the command
docker run --name adam-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
# --name  the container name
# -e      set an environment variable inside the container
# -d      run the container in the background (detached); the last argument, mysql:tag, is the image (and version tag) to use
Docker will download the image, set it up with the root password and run it inside a Docker container. A container is a running instance of an image – the image is the read-only template, while the container is the actual document you're working on; whether you save your work at the end of the day or not is up to you.
If you want to save your work, you need to use Docker volumes. Essentially, they mount files from outside the temporary container, so any changes made are actually saved somewhere accessible.
So once we have docker run, we only need to run this one command with lots of parameters and the hassle is solved, right? I wasn't impressed. Along comes docker-compose.
Docker-compose means declaring all the services, with their parameters, in a single file, and instantiating them all simply by running docker-compose up. By default, docker-compose picks up the docker-compose.yml that sits in the current folder. Let's have a look at my file and break it down:
version: "3.5"
services:
  elasticsearch:
    container_name: adam-elasticsearch
    image: "elasticsearch:1.5.2"
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - "discovery.type=single-node"

  redis:
    container_name: adam-redis
    image: "redis"
    ports:
      - "6379:6379"
    volumes:
      - ./config/redis.conf:/redis.conf
    command: ["redis-server", "/redis.conf"]

  mysql:
    container_name: adam-mysql
    image: mysql/mysql-server:5.7
    ports:
      - "3306:3306"
      - "33060:3306"
    environment:
      MYSQL_USER: root
      MYSQL_PASSWORD: "${ROOT_MYSQL_PASSWORD}"
      MYSQL_ROOT_PASSWORD: "${ROOT_MYSQL_PASSWORD}"
    volumes:
      - ./config/my.cnf:/etc/mysql/my.cnf
      - /tmp:/tmp
      - my-datavolume:/var/lib/mysql
      - ./config/mysql-init:/docker-entrypoint-initdb.d
volumes:
  my-datavolume:
ElasticSearch was quite straightforward: we name our container; we use an image of the right version; we pass an environment variable and we expose two ports to the world outside the container so we can access our server.
Redis was a bit trickier because by default it limits which IPs may connect to it. This is easily resolved by adding "bind 0.0.0.0" to its redis.conf (which means it will answer all IPs), but we need to mount our configuration into the container using the volumes setting and then start the Redis server with the configuration file as an argument (the command: line above).
MySQL was quite challenging, as not only do we need to bind the address to 0.0.0.0 in the my.cnf file, like Redis – we also need to create a user that has access and permissions from outside the container. That's why we set the environment variable MYSQL_USER – note that this is not root@localhost but rather root@%, which isn't bound to a specific host. However, it's still not a super-user, but we can fix that with an init.sql placed inside ./config/mysql-init (since we mounted that folder to /docker-entrypoint-initdb.d, Docker will automatically run all the scripts in it, in alphabetical order):
USE mysql;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;

You don't have to write the password directly in docker-compose.yml. By using ${VAR} you can set it in your environment variables or in your local .env file.

TIP 3: Creating a hidden ".env" file lets you set up local-scope environment variables. Super useful. Oh, and Cmd + Shift + "." will show hidden files in macOS.
And lastly, in order not to lose our data whenever we stop and start our services, we'll mount a volume ("my-datavolume" in the example) that will be managed by Docker.
TIP 4: If MySQL complains about sockets, the server very likely isn't really running;
error 1130 means the user doesn't have permissions (the init.sql should've fixed that);
error 1045 is a wrong password. Complicated passwords are great, but you're going to send yourself to hell if your script fails to read them properly (i.e. don't use quotes around or within your password).
If your server refuses to connect to MySQL, check whether it's actually running by:
  • Trying to connect to it directly (mysql -h127.0.0.1 -uroot -p);
  • Or via Docker (docker exec -it adam-mysql mysql -h127.0.0.1 -uroot -p)
    • Note that this is the internal root@localhost
    • Run SHOW GLOBAL VARIABLES LIKE 'bind_address'; and verify it's actually 0.0.0.0
    • Run SELECT host, user FROM mysql.user; and verify you have properly set up root@%
The world of Docker doesn't end here. Two possible directions we could take next are creating an image for our own application using a Dockerfile, or running multiple containers of the same image using clustering – but that's for a different time…

Once upon a time, in a forgotten little green valley, there was a small kingdom ruled by an old and wise king. The people of the kingdom loved their king and praised him constantly, despite the heavy taxes that burdened their lives and the debts that enslaved them and their children for generations.

And then one day a cry rang out, challenging the king's authority. Nobody expected that – especially not the king, let alone his unscrupulous tax collectors, who feared that any threat to the kingdom was a direct threat to their livelihood. But the biggest shock came when it turned out the cry came from none other than the king's very own beloved son. The people of the kingdom were baffled at the meaning of this, as the king loved his son and the son loved his king in return; but the cry was as clear as the lake's surface in the early morning – the son promised all those who wished to pledge their allegiance to him relief from the debt owed to his father. Freedom, at the price of servitude.

The king's council didn't know what to make of this, or what should be done with the disobedient prince and his impious followers, and as the king kept his silence, as he always did, they made the best decision they could: "The prince will be banished, yet his ruling will stand". At first his devout followers were punished severely, but as time went by and their numbers grew, it became less and less horrific. It seems the king never vindicated or rebutted his son, though those who still debate the matter would claim otherwise. A few even suggested that the debts were meaningless if they could never be repaid, but none of this really matters, as it happened a long time ago in a place very far from here.

Debt slavery (or "debt bondage"), however, is something that still exists in our world today. It is a person's pledge of labour or services as security for the repayment of a debt or other obligation, where there is no hope of actually repaying it. The services required to repay the debt may be undefined, and so may their duration. Debt bondage can be passed on from generation to generation. Today it is the most common method of enslavement, with an estimated 8.1 million people bonded to labour illegally, as cited by the International Labour Organisation in 2005 (source: Wikipedia). Debt bondage has been described by the United Nations as a form of "modern-day slavery", and the Supplementary Convention on the Abolition of Slavery seeks to abolish the practice. To learn more about modern-day slavery, visit the anti-slavery website.

So Microsoft purchased GitHub, and everyone is fussing over whether it's a good thing or a necessary evil. GitHub, which is a company like any other company, has three major benefits:

  1. Its name associates it with the tool Git, making it the default go-to online repository host.
  2. It was the first of its kind.
  3. It was widely adopted by the open-source community.

Microsoft, which is a monstrous corporation like any other, has its own quirks and perks:

  1. It's responsible for Microsoft Windows and Office.
  2. It promotes its own dev technologies, like C# and TypeScript.
  3. But it also did good things, like Xbox, VS Code and MS Paint.
  4. They have defiled Skype.

Will GitHub share the same fate as Skype? Not necessarily. It's important to remember that it wasn't Microsoft that killed Hotmail, it was Gmail, and that not every acquisition instantly kills a product (e.g. YouTube).

So what might go bad? Metadata regarding developers' activities will seep through to Cambridge Analytica; Microsoft technologies will be pushed down our throats; the service will deteriorate and eventually die. What might go right? The service will not die from bleeding money.

At the end of the day, there are alternatives to GitHub, so I don't feel compelled to break down in sobs. Not just yet, at least.

A URL (which stands for Uniform Resource Locator, but surely you knew that) consists of multiple components: scheme:[//[user:password@]host[:port]][/]path[?query][#fragment] (for example – https://admin:1235@127.0.0.1:4200/languages/javascript.html?lang=en#es6).
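
In the browser (and in Node) you can pull these components apart with the standard URL API; a quick illustration using the example above:

// breaking the example URL into its components with the standard URL API
var url = new URL('https://admin:1235@127.0.0.1:4200/languages/javascript.html?lang=en#es6');

console.log(url.protocol);                  // "https:" – the scheme
console.log(url.username, url.password);    // "admin" "1235" – the credentials
console.log(url.host);                      // "127.0.0.1:4200" – host and port
console.log(url.pathname);                  // "/languages/javascript.html" – the path
console.log(url.searchParams.get('lang'));  // "en" – the query
console.log(url.hash);                      // "#es6" – the fragment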

There's an interesting history to the evolution of URLs, but I'd like us to focus on the path part, as the word itself implies a certain meaning – the path represents the location of the file we are now browsing. But what about a single-page application? It's the same page showing different values as we use it. So we can either ignore these values (and keep the URL static), or we can update the query part of the URL (or update the path, which is sort of a lie, but we'll get back to that).

Updating the query means exposing something the user doesn't care to see. For her there is no difference between path and query; she's only interested in the results. And should the user care whether the file we're serving her is an HTML file or a doc file? Not really…

So the URL can represent a location, but it can also represent a state: if we imagine our single-page application as a finite state machine with, say, "start", "playing" and "game-over" states, the URL can indicate which state we are in. It actually makes a lot of sense, as it means I can copy-paste this session state and send it to someone else to pick up where I left off, for example. Although, in all fairness, states are usually not that portable – user-specific information is usually not part of the state (though it should be), or the state simply contains too much information to be described in a single line. But we could say that the URL contains the user-session ID, so that pasting it on a different machine (with the same user credentials) lets me continue my work – that can be pretty slick. How many times has a friend tried to send you specific search results and all you got was the empty search page? This could be quite useful…
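
A minimal sketch of what that can look like in practice, assuming a toy app whose entire state is a stage name and a score: keep the state in the query string and restore it on load.

// write the current state into the query part of the URL (without reloading the page)
function saveStateToUrl(state) {
  var params = new URLSearchParams(window.location.search);
  params.set('stage', state.stage);  // e.g. "start", "playing", "game-over"
  params.set('score', state.score);
  history.replaceState(null, '', window.location.pathname + '?' + params);
}

// read the state back when the page is opened from a copy-pasted URL
function loadStateFromUrl() {
  var params = new URLSearchParams(window.location.search);
  return {
    stage: params.get('stage') || 'start',
    score: Number(params.get('score')) || 0
  };
}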

Another meaning we can give to a URL is that of an action. Imagine you could tell a web application what to do, like a CLI (command-line interface) – for example gmail.com mail friend@somewhere.com "great idea!" "hey, loved the new design" – and that's it, the email is sent without the need to actually visit the website. It's a bit of a superuser hack, as normal users won't bother learning each website's internal language.
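
Just to illustrate the idea (the gmail-style command above is imaginary), an application could interpret its own URL as a command line along these lines:

// a toy dispatcher that treats the URL path as a command and the query as its arguments,
// e.g. https://example.com/mail?to=friend@somewhere.com&subject=great%20idea
// sendMail, showResults and showHomePage are placeholders for the app's own functions
function runUrlCommand() {
  var url = new URL(window.location.href);
  var command = url.pathname.split('/')[1];  // e.g. "mail"
  var args = url.searchParams;

  switch (command) {
    case 'mail':
      sendMail(args.get('to'), args.get('subject'), args.get('body'));
      break;
    case 'search':
      showResults(args.get('q'));
      break;
    default:
      showHomePage();
  }
}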

Or maybe the URL should be completely hidden from the user, which is pretty much how Safari treats its URLs. Imagine every online element having its own unique ID, a crazy 32-byte string with no decipherable meaning, which a computer can nonetheless decode to find the exact machine and command it needs in order to retrieve the right content. Nowadays, magazine ads that provide online content refer their readers to google or facebook their product, or provide a QR code, instead of giving a complicated URL.

Personally, I prefer my URLs meaningful, whether as a state representation or an action, but that's just the geeky superuser part of me; at the bottom line, the average user doesn't and shouldn't care about the URL.