Frontend engineering over the horizon

What direction will front-end engineering on the web go? How will applications be built and delivered in the near future?

The web started off as a way to link documents together, as an implementation of hypertext. Hypertext originated as an idea for how to help humans organize their thoughts as outlined in As We May Think, back in 1945. But it didn’t stay just a way of organizing human thought.

In May 1995, Ben Slivka at Microsoft was working on Internet Explorer, and in his memo, The Web is the Next Platform, he correctly saw the internet as an application delivery platform. It didn’t take much longer for Bill Gates to come around.

Once we got used to that idea, a slew of web 2.0 applications appeared in the mid-2000s, such as Del.icio.us, Flickr, and WordPress. The application was delivered on every request, due to the stateless nature of the web. That’s why, when you used the web back then, the entire page would flicker, refresh, and reload.

Then around 2005, web devs started leveraging a then-little-known piece of technology from the mid-90s called XMLHttpRequest to dynamically update a web page. Coined AJAX, it was a way to request a partial piece of data from the server and update only part of the web page. The application of this technology was at its starkest when Google Maps first came out as competition against the then-dominant MapQuest. MapQuest required users to click a thin blue bar to move the map in a given direction to see more of it.

By contrast, Google Maps let you drag the map around to navigate, and it would dynamically load the parts of the map coming into view. I remember being blown away when I first used Google Maps.
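The AJAX pattern boils down to requesting a fragment of data and patching one part of the page. Here is a minimal sketch; the transport object is injected purely for illustration (in a browser it would be a real XMLHttpRequest, and `render` would swap the contents of a single DOM node):

```javascript
// AJAX in miniature: fetch a fragment and patch only part of the page,
// instead of reloading the whole document. The transport is injected so
// the sketch isn't tied to a browser environment.
function updateTile(xhr, url, render) {
  xhr.open("GET", url);
  xhr.onload = () => render(xhr.responseText); // patch just one element
  xhr.send();
}
```

That single trick — update one tile instead of the whole page — is what made Google Maps feel seamless next to a full page reload.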

It was also around this time that John Resig started working on jQuery. Frontend engineering started to diverge from backend engineering at this point. It’s never looked back, in an explosion of front-end frameworks, from Backbone, Knockout, and Ember to React, Vue.js, Svelte, and Next.js.

Frontend devs are always anxious about whether they’re hitched to the right wagon. I don’t have a crystal ball, but I see a couple of interesting things on the horizon.

First on my radar is Deno, a program to run javascript programs–a runtime. Typically, we use the browser to download and run javascript programs every time we visit a URL. Deno strips out all the other things that a browser does, and just focuses on the execution paradigm of a browser: executing javascript programs downloaded from the web in a sandboxed environment.

Deno will always be distributed as a single executable. Given a URL to a Deno program, it is runnable with nothing more than the ~15 megabyte zipped executable. Deno explicitly takes on the role of both runtime and package manager. It uses a standard browser-compatible protocol for loading modules: URLs. — Introduction to Deno

Therefore, you can still deliver web apps, but through a 15 MB executable instead of a multi-gigabyte browser. It frees us from relying on NPM, a private company, for our public developer libraries. Taken a step further, you could conceivably deploy javascript libraries on IPFS or Filecoin, to ensure they’re always available without relying on a private company. Deno has the makings of changing how applications are distributed.

Second on my radar is WebAssembly (WASM). Most web applications are not compiled ahead of time. They’re interpreted and executed on the fly as the browser downloads and reads them. That’s why javascript dominates web application frontends–browsers only interpret and execute javascript.

But what if the browser had a common binary format as a compile target? That means you can now leverage any programming language and deploy it to the web, and browsers could run the application.
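To make “common binary format as a compile target” concrete, here is a hand-assembled WASM module that exports a single add function, instantiated from javascript. The bytes below encode a module that any WASM toolchain, from any source language, could have produced:

```javascript
// A minimal WASM binary, equivalent to:
//   (module (func (export "add") (param i32 i32) (result i32)
//     local.get 0  local.get 1  i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // code: get 0, get 1, add
]);
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
```

Browsers and Node can both instantiate this; once a program is in this form, the language it was written in no longer matters to the runtime.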

Right now, I see the most activity with WASM in the Rust programming language ecosystem. Rust programmers are doing some amazing things with WASM. Makepad is an IDE that blew me away the same way Google Maps blew me away 15 years ago. First, it supports live-code editing, where changes to the code are immediately reflected in the program. Second, the UI is butter smooth. Try hitting the “Alt/Option” button and see the details of the code shrink so you get a sense of the overall code structure. Lastly, it supports a VR mode, so you can code collaboratively with other people with presence.

Rik Arends, the person behind Makepad, had this to say:

“The reason game-engines look so much faster than web-browsers is because the W3C specs force browser implementations to be slow. Covering the edge cases quickly gums up any attempt to be fast. In a game engine you simply don’t do the expensive thing because it’s slow.” — https://twitter.com/rikarends/status/1327190734106144768

Which transitions to the last thing on my radar: Godot, a game engine/IDE that can compile to WASM. Normally, game devs and web devs are worlds apart. The tools, ecosystems, and target customers are all different, so there’s not much cross-pollination. However, now that WASM is a compile target, people have started experimenting with building entire web applications in Godot and deploying them as WASM.

With the success of applications like Figma, I think we’ll see more and more web applications that leverage WASM to do what we didn’t think was possible before, and none of this will be done with the current frontend stack. Whether it will be done with Godot or something else remains to be seen.

A common theme I see here is the disintermediation of parts of the browser, in order to change how distribution works. With a change in distribution comes a change in possible business models. Will React and other frontend frameworks go away? No. For many applications, they will be good enough. But any web application with harder demands–to wow users and create unique experiences–will increasingly be built with WASM.

What if apps had ‘hours of operation’ like retail stores?

Having hours of operation works really well to concentrate attention or liquidity when you have scarce amounts of either. Trading platforms seem to benefit from this, as trade volume jumps when the bell rings on Monday.
 
You can also concentrate the number of channels. A mistake I see early forums and community spaces make is creating too many channels at first. With every channel, you dilute the rate of conversation, making it hard to bootstrap the community.
 
Another variant is to concentrate inventory instead of time. It’s a technique used to great effect by Groupon in the early days, when they only had one deal, rather than lots.

What is the equivalent of home appliances?

Credit didn’t use to be available to the middle class in America. It was for businesses and/or the wealthy. Consumerism didn’t start until the 1920s, when technological advances produced better time-saving devices that were priced just out of reach of individuals.

Smelling a need, financial institutions expanded credit to be available to the average middle-class American. For the first time, instead of waiting months or years to save up for a new car, fridge, or radio, they could own it immediately and pay for it over time.

When there’s borrowing, there’s KYC (know your customer) for lending, and credit scoring. After all, banks wanted to be sure the borrower could pay back the loan.

In fact, without the appearance of home appliances, there likely wouldn’t have been lines of credit or credit cards as we know it.

Currently, DeFi (decentralized finance) is a growing niche within cryptocurrencies where users borrow and lend money through smart contracts. It’s gotten a bunch of traction and attention. There is currently $466 million of ETH locked up in MakerDAO. Check out the rates of other DeFi protocols on Loanscan.

Currently, all borrowing is over-collateralized, meaning you can’t borrow more than you’ve deposited. That’s because without credit scoring or KYC, lenders have no assurance that borrowers will pay back the money.
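Over-collateralization is simple arithmetic. A sketch, with a 150% minimum collateral ratio assumed purely for illustration (each real protocol sets its own):

```javascript
// With no credit scoring, the deposit itself is the only assurance:
// a borrower can take out at most collateral / minimum-ratio.
const MIN_COLLATERAL_RATIO = 1.5; // assumed for illustration; varies by protocol

function maxBorrow(collateralValue) {
  return collateralValue / MIN_COLLATERAL_RATIO;
}
```

So depositing $150 of ETH lets you borrow at most $100 — the loan is always worth less than what you’ve locked up.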

There are people working on KYC and identity in the space. And we know much of the activity right now is trading on speculation. But a second order question is probably more important than how to solve identity on any blockchain:

“What’s the equivalent of the home appliances for cryptocurrencies? What do people want/need to buy that they need to borrow cryptocurrencies for?”

The situation that led to rampant consumerism in the 1920s was unique to its time. What is the situation unique to our time that people will want to borrow money for?

Without a clear answer to this question, it doesn’t yet make sense to solve identity for under-collateralized borrowing and lending. And if you’re making a DeFi lending product, a clear answer will help light your way.

Datalog is a thread to pull on

Ever since Eve, I’ve meant to look into Datalog. Peter Alvaro’s @strangeloop_stl talk also piqued my curiosity. He used Datalog as a restricted language to build concurrent programs.

So what is Datalog? It’s an old language that’s more powerful than relational algebra, because it has recursion. But it’s less powerful than Prolog, because it has no negation and no ability to express algorithms beyond polynomial time. It’s not a Turing-complete language.

Why use such a restrictive language? In Peter Alvaro’s talk, he uses the self-imposed constraints to avoid certain classes of programming mistakes in concurrent programs.

I found these two resources very helpful for beginners (the prereq is that you kinda know what logic programming is and can read Haskell). The first gives you an intro to Datalog.

And if you want to build your own naive compiler and evaluator for Datalog, the second will take you along. And if you’re curious about the Eve implementation in Rust for v0.4, check it out.

Logic programming essentially boils down to a graph search over the space of possibilities. All the details are about how to do that graph search effectively.
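A naive bottom-up Datalog evaluator makes this concrete. The sketch below evaluates transitive closure — the rules path(X,Y) :- edge(X,Y) and path(X,Y) :- edge(X,Z), path(Z,Y) — by re-applying the rules until no new facts appear:

```javascript
// Naive bottom-up evaluation: seed the path facts from the edge facts,
// then re-apply the recursive rule until a fixpoint is reached.
function transitiveClosure(edges) {
  // path(X, Y) :- edge(X, Y).
  const paths = new Set(edges.map(([x, y]) => `${x}->${y}`));
  let changed = true;
  while (changed) {
    changed = false;
    for (const [x, z] of edges) {
      for (const fact of [...paths]) {
        const [from, to] = fact.split("->");
        // path(X, Y) :- edge(X, Z), path(Z, Y).
        if (from === z && !paths.has(`${x}->${to}`)) {
          paths.add(`${x}->${to}`);
          changed = true;
        }
      }
    }
  }
  return paths;
}
```

Because there’s no negation, adding a fact can never invalidate an earlier one, so this monotone loop is guaranteed to terminate at a fixpoint — exactly the restrictiveness that makes the language easy to reason about.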

Noticed stock gift cards. What about crypto gift cards?

A company called Stockpile is selling stocks in the form of gift cards at Target. https://twitter.com/patio11/status/1140119792596033537

Putting stocks in gift card form with a bit of design (good UX is oft missing from traditional finance) seems to make them accessible to kids and the lower middle class. What seems clever about this? The play here is to use a new distribution channel (retail) to find a new market.

According to the tweet thread, you pay a $5 commission on a $20 trade for the plastic gift cards. That’s highway robbery, but you’re paying for the education.

They mostly provide stocks, but also some ETFs. There is a single exposure to Bitcoin through Grayscale’s Bitcoin Trust (search GBTC). https://www.stockpile.com/gift/basket/choose_stock

That should open up markets to people who don’t need to know anything about crypto, and it’s another onboarding ramp.

Taking it further—technically, you should be able to create new financial instruments with smart contracts and sell them at Target in the form of a card. The private keys for an address are generated and held in the card itself.

Operationally, I can see how hard it would be.

1/ Making gift cards should probably not be too hard, but secure cards might take some engineering. Are there off-the-shelf chip cards you can flash software onto? It seems unlikely, for security reasons.

2/ Most big retailers (Target, Costco, Best Buy) are not going to want to carry crypto-based products out of FUD. Maybe Walmart would, as they have a Labs division that’s already looking into blockchain for supply chain.

3/ Even if they do, you’d have to spend money on making end cap displays. Also, because they own distribution, they can make more onerous demands, like requiring your product to sell well or they won’t give you good shelf space.

Or for the remaining inventory, you have to promise to buy it back. They have high costs too, such as making advertising flyers and training employees on these new products.

4/ Regulatory restrictions would probably require KYC for these crypto gift cards.

Is there a second order effect that makes this an interesting business or technical breakthrough if this works?

I currently don’t see either, but the cultural implications might be interesting. Once people figure out these things have value, they can do a bunch of OTC trades on the streets for any type of behavior normally deemed unacceptable.

Even if regulations require KYC to move the money, having a physical representation of the money is enough for people to trade it around, without having to actually move it. It’s much like how banks don’t actually move stock certificates around, but just change a ledger of who owns what at the end of every day.

What is it like to be rejected by Y Combinator?

Originally posted on Quora on Sep 19th, 2014.

I applied six times between 2005 and 2010, before getting in on the seventh. Someone on Hacker News once called it winning the Oscar of Rejections. I can’t say I’m particularly proud of that distinction, as I’d rather be successful. That said, the lessons are more apparent when you fail than when you succeed. I wrote about it in the form of advice for people applying to YC.

So as for what it feels like: the first couple of times, it felt like disappointment and a huge letdown. At the time, I wasn’t sure what they were even looking for, but thought maybe we had it. So after sending the application in, there was all the tension and melodrama of gambling: the ball spinning around the roulette wheel, waiting to land on your number.

After a while, I started to figure out what they were looking for, and subsequent rejections felt expected rather than disappointing. When I finally got in, it was a little unsurprising. We had done more to prepare and shore up our chances than at any other time–though it was no guarantee.

Nowadays, I feel if you’ve been rejected by anything, all you do is keep working. Sometimes, it’s a blessing in disguise. If you’re not at the right stage in your own development as an entrepreneur, even if you get in, you won’t be able to properly leverage YC to build a successful company. Other times, you’re overlooked, but don’t take it personally. Mistakes happen. Keep on working. Get to be so good they can’t ignore you.

I read somewhere about how it feels for athletes to win a championship game: rather than a fluke of peak performance, like it seems in the movies, it feels like the execution of a habit. That’s probably what it should feel like.

Objections to Blockchains remind me of objections to Git

On the HN thread.

It’s just really hard for people to see beyond the present. Innovators in one cycle are often blind to the opportunities in the next. I see it in a couple of my successful friends that built their own companies on the web and mobile when it comes to crypto.

I’m reminded of the quote about the radio:

“The wireless music box has no imaginable commercial value. Who would pay for a message sent to no one in particular?” — Associates of David Sarnoff responding to the latter’s call for investment in the radio in 1921.

And then a couple of years later, for someone that made their stake in radio:

“While theoretically and technically television may be feasible, commercially and financially it is an impossibility, a development of which we need waste little time dreaming.” — Lee DeForest, American radio pioneer and inventor of the vacuum tube, 1926

And remember TV didn’t really come into its own until decades after 1926.

Yesterday, I was reading about past objections (circa 2007) to using Git over CVS or SVN, and most of the objections turned out to be unimportant. They focused on how a given team didn’t have the decentralized workflow the Linux kernel does, so they didn’t need Git. Or that the interface sucked, so they didn’t need Git.

Turns out everyone mostly uses Git in a centralized way (via GitHub) to coordinate, but the decentralized design reveals secondary affordances that are really useful in practice. It allows you to work and commit offline–great as laptops became more pervasive and more people worked on them. (You used to have to be online to commit, and a commit could sometimes take minutes.) The decentralized design also used content hashing with parent chains, which makes the history inherently self-checking–no object modification can escape detection. Git’s data structure makes commits fast, which in practice makes fine-grained commits possible. And the decentralized design gives everyone a backup of the repo, so the central repo getting nuked by an intern doesn’t take down the org. Nowadays, in saying these things, I’m preaching to the choir. But back then, it was completely non-obvious.

The only people that got it right back then are the ones that dug into the technical details, and then actually tried it out. Notably, the guy at X.org.

And I think it’s a similar thing with cryptocurrencies: the fundamentally different design and architecture of blockchains allows for secondary effects that are harder to see (until you’ve actually used or programmed with one) and that aren’t available to you at all otherwise.

The trap most people fall into is comparing the new X with what the old Y does very well. They never consider that if X really has a new underlying concept, it might suck at what Y has had time to get good at, but allow for some other thing that we just couldn’t do at all before.

 

Marginal Cost vs Full Cost

The fallacy of marginal cost only applies to decisions when a fundamental shift has occurred. If no shift has occurred, making decisions solely on a comparison of marginal costs works fine.

That means when it comes to leveraging personal skills and connections, you should double down on what you’re good at if nothing fundamental about the environment has changed–whether that environment is distribution, relationships, technology, or culture. But when it has changed, you need to take a deep breath and fully retool, allowing yourself the time and the mindset of a beginner in order to make the transition. That also means saving the money and creating the environment so that you can do so.

How do you tell if something fundamental about the environment has changed? It helps when you can extract the concept of something from the concrete something. For example, when lots of front end frameworks proliferated in 2013, lots of people felt like they couldn’t keep up. They threw up their hands and joked that only the cool kids knew the latest thing–newness was the only way they could judge anything. But you should only learn something if the underlying concept is new to you. Concepts are an abstraction or generalization of how something works.

That’s why learning something’s underlying concept goes a long way. Knowing the hows and whys of something, and finding a generalizable abstraction, will help you recognize shifts, so you can make better decisions in a changing world.

Working on a Personal Relationship Manager

I got asked if I wanted to work on a Personal Relationship Manager. I politely declined, and wrote up the reasons why:

On the technical side

If you’re going to write something to help do relationship management, it’d better be really constrained and focused in scope. Because we often use umbrella terms to mean different but related things, it’s really easy to build assumptions into the software that don’t serve large segments of your users. For example, we built Noteleaf to cover ‘meetings’, but the workflow we had was only meant to cover ‘coffee meetings between founders’. All sorts of people we weren’t built for signed up, since they saw ‘meetings’ and thought it applied to them. As a result, we got a hodge-podge of user feedback that we couldn’t decipher at the time, because it was actually different segments of users with completely different needs and jobs to be done. That’s not to say it can’t be done in the near future. DNNs may be able to handle all the different variations of ‘meetings’, but that’s more hand-waving on my part than anything concrete.

On the business side

Most people don’t care enough about managing their personal relationships to pay for it, and hence it’ll be really difficult to build a business on top of it. That’s why most relationship software is CRM: businesses care a lot about knowing who their customers are and what they’ll buy. On the flip side, the closest we come to a personal relationship manager is Facebook and other social networks. And even then, it’s not so much about the management (managing is work…no one wants to do work), but about getting news (pictures and text) about their friends.

It seems like there’s a new social network about every 7 years or so: AOL IM ~1997, Facebook 2004, Instagram 2010/Snapchat 2011. 2018 might be ripe for another one. I’m not sure if it’s because teenagers don’t like being where the adults are or what, but that seems to happen.

If you want to work on something related to a personal relationship manager, I’d work on a new email client (search for DotMail). People get tons of email, really hate their email client, and are excited for something that works better. You should really understand why people use email (i.e., as a todo list rather than point-to-point messaging, or to transfer files between their devices) before attempting it. However, be forewarned that email protocols are old and shitty to work with. It’s a lot of work to get something basic up and running.

How to take apart the Panasonic Toughbook CF-W2

From the archives. Good thing for the Wayback Machine.

1. 1 screw inside the CD/RW unit, underneath the cover on the left. Do this before you power down the unit; otherwise you will have to force the drive open from the bottom.
2. 2 nut screws on the VGA connector on the top left side.
3. Take the plastic cover off the Wireless antenna near the Intel logo on the right side of the unit by prying the cover loose. Remove 2 screws at side of antenna.
4. Turn the notebook over.
5. 7 screws at back. Make a note of the screw positions on a piece of paper, since there are 3 long and 4 short screws.
6. 1 screw for the Memory Cover
7. 3 long screws holding the Honeycomb cover, next to the memory slot. These screws hold the keyboard.
8. 1 inside after the Honeycomb cover is removed.
9. Turn the unit over. Open the lid. Push the keyboard up from the back, under the battery. The top part comes up. Now lift up the keyboard. Pull the ribbon out underneath the TOUGHBOOK sign. Unclip the ribbon from the unit very carefully.
10. 2 silver screws on inside just above the logo Intel Centrino under the keyboard
11. 6 black screws under the keyboard
12. Now close the lid and turn the unit over.
13. 4 screws holding the LCD Hinges at back (2 Screws each side).
14. Now slowly pull the back cover up.