Cats, taverns and cleaning systems

Tarn Adams:

It’s funny how I have popular bugs, right? You shouldn’t have popular bugs. […] I added taverns to fortress mode, so the dwarves will go to a proper establishment, get mugs, and make orders, and they’ll drink in the mug. And, you know, things happen, mugs get spilled, there’s some alcohol on the ground.

Now, the cats would walk into the taverns, right, and because of the old blood footprint code from, like, eight years ago or something, they would get alcohol on their feet. It was originally so people could pad blood around, but now any liquid, right, so they get alcohol on their feet. And then I wanted to add cleaning stuff so when people were bathing, or I even made eyelids work for no reason, because I do random things sometimes. So cats will lick and clean themselves, and on a lark, when I made them clean themselves I’m like, ‘Well, it’s a cat. When you do lick cleaning, you actually ingest the thing that you’re cleaning off, right? They make hairballs, so they must swallow something, right?' And so the cats, when they cleaned the alcohol off their feet, they all got drunk. Because they were drinking.

But the numbers were off on that. I had never thought about, you know, activating inebriation syndromes back when I was adding the cleaning stuff. I was just like, ‘Well, they ingest it and they get a full dose,’ but a full dose is a whole mug of alcohol for a cat-sized creature, and it does all the blood alcohol size-based calculations, so the cats would get sick and vomit all over the tavern.

The original bug report is, ‘There’s cat vomit all over my tavern, and there’s a few dead cats,’ or whatever, and they’re like, ‘Why? This is broken.’

People helped me with this. We were all looking and figuring out, ‘What the heck is going on here?’, and that was the chain of events. It’s like doing the detective work to figure out that entire chain of events is what happened. You can see how adding just a tavern that gave the opportunity for spilling alcohol, which was really uncommon before, now all the spilled alcohol starts to, form in one location where something could start to happen. You activate bugs and little parts of code from eight, six years ago where you just didn’t balance the numbers because it didn’t matter. […]

PC Gamer: So the cats’ inebriation system was just based on any organism would have the potential to get drunk.

Yeah, right now it’s any creature that has blood, and that includes, like, an octopus. I don’t know if an octopus can get drunk or not.

The way the different systems all interact is so fascinating. Especially when the outcome is something like this. :)


I don’t even use version control. If you don’t know what that is then you’re not gonna yell at me. If you even know what version control is you’re gonna be like, ‘You don’t use version control? You don’t use source control? What is wrong with you? How can you even work?’

I'm still baffled that so many game developers (or other developers) work without version control. It seems so dangerous, but… well.

Delight

Craig Mod in his beautiful essay about the Leica Q:

And what is delight? For me, delight is born from a tool’s intuitiveness. Things just working without much thought or fiddling. Delight is a simple menu system you almost never have to use. Delight is a well-balanced weight on the shoulder, in the hand. Delight is the just-right tension on the aperture ring between stops. Delight is a single battery lasting all day. Delight is being able to knock out a 10,000 iso image and know it'll be usable. Delight is extracting gorgeous details from the cloak of shadows. Delight is firing off a number of shots without having to wait for the buffer to catch up. Delight is constraints, joyfully embraced.

Bonus:

It should not exist. It is one of those unicorn-like consumer products that so nails nearly every aspect of its being — from industrial to software design, from interface to output — that you can’t help but wonder how it clawed its way from the R&D lab. Out of the meetings. Away from the committees. How did it manage to maintain such clarity in its point of view?

And:

The Q — like most recent Leicas — is engraved with the softly geometric, proprietary LG 1050 typeface. It feels so, totally, completely at home, stamped into the camera body in all caps. It's highly legible and precisely designed. Minimal, functional, but with a bit of quirky character. Like the Q itself. This is the perfect camera typeface used in the perfect way. Mic dropped. Case closed.

VR is not the future

This is a prediction. I may (must?) be completely wrong—after all, I hold an unpopular opinion and many tech companies seem to think otherwise.

We'll see. ¯\_(ツ)_/¯

Virtual Reality (or VR) is a fantasy that many geeks love. It's in books, in movies, in games. In our collective mind, virtual reality is like flying cars: a dream we have always envisioned for our future as a high-tech society.

Well, we all remember that we didn't get flying cars for the new millennium.

Incredible technologies portrayed in science-fiction books or movies generally turn out not to be that relevant in reality. Or they are so hard to build that we can't even fathom creating them now. Historically, the way our society evolved was often very different from how we imagined it in our fiction. When confronted with reality, many dreams are just that: dreams. I would need to dig up some of my old sociology courses—but I remember this being a recurring theme in how societies evolve.

For VR, it's a bit different: we (almost) have the technology. However, I do think it's just a fad. A fad that excites many marketers and entrepreneurs. Its real-world applications are narrow.


VR is a cool demo. VR is impressive. Even with our primitive VR technologies, it's still incredible to try.1

VR is also making people sick—yes, many people suffer from motion sickness.

VR is pretty boring for games, its main application right now. Yes, the immersion is better. Nope, the controls are worse. You can't move in a game without hitting a wall. You can't use your hands without complex gloves or other peripherals. The possibilities are not that huge: okay, it might be great for some first-person games where you don't move. But that's just a small subset of all the games we can create.

VR is ugly. VR is cumbersome. VR is like Google Glass (remember that?) in so many ways.

The Future?

The future? Really?

If you're at home, it might be fine, although wearing yet another thing on your head is not that comfortable. I already wear glasses; I really don't want to add anything more to my head, I assure you. Especially if it means I might walk into a wall at some point.2

Outside? At work? We are going to need many years to be okay with that.

The day we'll have VR headsets the size of a regular pair of glasses is far off. We might already be destroying our eyes by staring at screens all day—I can't wait to wear a VR headset a few centimeters from my retinas all the time!


VR is not completely useless. There are some fields where virtual reality might be useful. Want to visit a new flat in a town far, far away? Are you a future surgeon who needs to practice during your school years? There's a myriad of applications outside of gaming and home use.

Unfortunately, right now, people imagine that VR will be used to operate software and computers, or to play games. That's absurd.

Seriously? Who would do that? What's the point? Who in their right mind thought it was a good idea!?

When I say that VR is not the future, I don't mean that VR will disappear completely. But we need to take a step back and really think about it. We need to stop trying to put VR everywhere—it's just a waste of time.

The only application that might make VR a general-use product is porn. Honestly, except for that, I don't see a future where VR is ubiquitous. It just doesn't make any sense.


Virtual reality is a fad. And right now, journalists and tech companies are all buzzing and craving it. When the first officially released headsets become available, I expect a massive flop. As soon as the novelty wears off, people will just forget it, because outside of some narrow use cases, it's just not practical or invisible enough.

F*** it, I give up.


  1. VR is the new AR? I'm still inclined to think there's a slight chance that AR might come back in one way or another and be relevant. 

  2. It's like 3D glasses in theaters, but worse. And 3D glasses are already an aberration, we all agree, right? 

Why do many software engineers hate Java?

Michael O. Church:

First, let's cover the technical issues. It's verbose, combines the worst of both worlds between static and dynamic typing by having a hobbled but extremely clunky type system, and mandates running on a virtual machine (VM) that has a macroscopic startup time (not an issue for long-running servers, but painful for command-line applications). […]

The VM itself has a lot to recommend it. It offers concurrency and garbage collection at a level of quality that, until recently, wasn't found anywhere else. […] Much important software in the early 2000s was written in Java because, at the time, it was the best choice, even taking the mediocrity of the language itself into account. It had Unicode (albeit, UTF-16) from the start and a strong concurrency story, and it was a notch above C++ in terms of user experience. […]

If you put Java on a technical trial, it doesn't do so bad. The language sucks, the platform is pretty good for most purposes. I do hate the dominant interpretation of "object-oriented programming" with a passion, because it objectively sucks. […]

So let's talk about the political and cultural issues. First, the dominant Java culture is one of mediocrity and bad taste, with MetaModelVibratorVisitorFactory classes dominating. I've heard a number of experts on "the Java issue" argue that Java's biggest problem is the community, and that comes directly from the fact that good programmers don't want to deal with the bastardization of OOP that has entrenched itself in mainstream corporate development. You have a lot of people who trained up as "Java programmers", haven't seen a command line ever, and have no clue how the computer actually works. Most of them have never actually written a program; they just write classes and some Senior Chief Architect (who makes $246,001 per year and hasn't written a line of code since the 1990s) figures out how to stitch them together, and then tells some other clueless junior how to implement the glue in the gutshot hope that one will actually have the talent to make an actual working program out of the mess.

This isn't inherent to the JVM, because Clojure (currently hosted on the JVM, although its endgame seems to be language-agnosticism) has a radically different (and better) community. Scala's community is more mixed, but the top Scala engineers (the ones making tools like Spark and Kestrel) are really fucking good.

[…]

So, the short answer is: it's mostly not about the underlying platform (which is generally of high quality) and it's only partly about the language (which is mediocre but can't be blamed directly for community problems). It goes a lot deeper than that, and not all of it is Java's fault. However, Java takes its share of the blame for its clear favoritism toward large programs (technically, established by its long startup time) and by its support of a very ugly (and counterproductive) variety of object-oriented programming.

Stanley Idesis:

I’ve read the popular answers and will claim no contest to a majority of the complaints I found. […] However, here’s why I love it. […]

In the Android world, developers use Java because they have to. However, much creativity has sprung from that limitation. People have written many excellent OS libraries in Java for Android. The platform continues to be a hotbed of Java activity which developers contribute to from all over the world. […]

Android’s open nature, wide distribution, and powerful support from Google are almost enough to make Java hip again. […] Sure, some may argue that other runtimes and languages can enable the same capabilities of the device, but for whatever reason, Android chose Java.

Until Google changes its mind, Java will remain relevant and the source of fascinating developments in the software community as long as Android remains popular.

Inside the mind of a master procrastinator

Tim Urban about rehearsing for his TED Talk:

I’ve mentioned before that we all have this problem where we’re weirdly obsessed with what other people think of us, so it makes sense that public speaking should be our collective phobia.

But then we also live in a world where public speaking can happen to any of us at any time.

[…]

"Hey TED Staff,

I've decided to do my talk on procrastination. It's the thing I'm best at.

Best,

November Tim"

[…]

[My rehearsal] was three days before my talk—and it was pretty rough, confirming to me and everyone present that I was officially not a fraud when it came to my topic. The irony of a guy rehearsing his TED Talk about how he’s a bad procrastinator, and being clearly underprepared while doing so, was not lost on anyone.

UPDATE: and the talk is up on TED. No surprise: it's great.

Invention Centers

Alan Kay:

Invention centers are 20 to 40 people doing odd things. Innovation is the process of taking something that’s already been invented and packaging it nicer.

Problem-finding is about how to get something out of almost nothing in some new area. You're by definition not doing something incremental. There’s a lot of playful stuff going on. The probability of a good idea is pretty low. Most of the ideation that happens [in an invention center] are things that get rejected, which is normal in this line of work. Very few people understand that.

Later:

The shortest lived group at Xerox PARC was "Office of the Future," because Xerox executives would not leave them alone.

I chose the most innocuous name for my own group, the Learning Research Group. Nobody knew what it meant, so they left us alone to invent object-oriented programming and the GUI.

So weird that something like CDG is backed by SAP.

(via Avdi Grimm)

The sad state of web app deployment

Eevee on Fuzzy Notepad:

We’ve been doing this for 20 years. We should have this by now. It should work, it should be pluggable and agnostic, and it should do everything right. […]

Instead, we stack layer after layer of additional convoluted crap on top of what we’ve already got because we don’t know how to fix it. Instead, we flit constantly from Thin to Mongrel to Passenger to Heroku to Bitnami to Docker to whatever new way to deploy trivial apps came out yesterday. Instead, we obsess over adding better Sass integration to our frameworks.

And I’m really not picking on Ruby, or Rails, or this particular app. I hate deploying my own web software, because there are so many parts all over the system that only barely know about each other, but if any of them fail then the whole shebang stops working.

See also: Something Slightly Less Terrible.

Compilation by a thousand semicolons and commas

Something I like in most modern programming languages is that they tend to ditch semicolons completely.

Semicolons are a nuisance, a waste of time, a source of (easily fixable) errors, ugly and unnecessary.

Compilers can easily work without them. Keeping them in old languages is, most of the time, simply a question of backward compatibility.

And that's fine, after all. In modern languages, though, this is something that must be avoided. It's wrong and useless.

Consider this snippet of Swift:

var greeting = "Hello!"

if let name = optionalName {
    greeting = "Hello, \(name)"
}

No semicolon. Great. Now, consider a similar snippet of Go:

func sqrt(x float64) string {
    if x < 0 {
        return sqrt(-x) + "i"
    }

    return fmt.Sprint(math.Sqrt(x))
}

It's pretty nice, isn't it?

Go does use semicolons in its grammar. However, it's the lexer's role to add them, not the developer's. Let the machines do the shit work.

Here's the extract from Effective Go:

Like C, Go's formal grammar uses semicolons to terminate statements, but unlike in C, those semicolons do not appear in the source. Instead the lexer uses a simple rule to insert semicolons automatically as it scans, so the input text is mostly free of them.

And unlike in an extremely popular language, Go imposes some formatting rules on its programmers, and nobody has a problem with ASI.
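
To make that rule concrete, here's a small, self-contained sketch (the sign function is just an invented example) of the one consequence Effective Go calls out: the opening brace must stay on the same line as the statement that introduces it.

package main

import "fmt"

// A consequence of automatic semicolon insertion: the opening brace of a
// control structure cannot go on its own line. Written as
//
//  if x > 0
//  {
//      ...
//  }
//
// the lexer would insert a semicolon right after "x > 0" and the code
// would no longer compile.
func sign(x int) string {
    if x > 0 {
        return "positive"
    }
    return "zero or negative"
}

func main() {
    fmt.Println(sign(3))
}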

However, there's still one perfectly valid case where semicolons are not optional:

if v := math.Pow(x, n); v < limit {
    return v
}

Between math.Pow(x, n) and v < limit, you will find a semicolon. Because the two statements are on the same line, the compiler needs the semicolon to know where one statement ends and the next one starts.

Basically, the rule is simple:

  • Do not use a semicolon when a statement ends with a newline.
  • Use a semicolon when a statement is followed by another statement on the same line.

That's the basic consensus in new languages.
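
A tiny, self-contained sketch of both rules in Go (the variable names are just placeholders):

package main

import "fmt"

func main() {
    // Statements end at the newline: the lexer inserts the semicolons,
    // so you never type them.
    x := 1
    y := 2

    // Two statements on one line: you separate them yourself with an
    // explicit semicolon (valid Go, even if not the idiomatic layout).
    a := 3; b := 4

    fmt.Println(x, y, a, b)
}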


So, here's my question: why don't we do the same thing with commas?!

Let's look at the following snippet of Swift:

var test = ["a", "b", "c"]

As you can see, each value of the array is separated by a comma. Same rule as the semicolons: if you are on the same line, use a separator. In this case, it's a comma instead of a semicolon because the semantics are different. Fine.

Now, let's look at the same snippet with a multi-line notation — this example might seem contrived, but this kind of notation is clearer when the values are more complex and/or numerous:

var test = [
  "a",
  "b",
  "c"
]

Wait. What. It's exactly like in the one-line declaration just above. Which is not surprising, because almost any language will do that.

That's inconsistent though. For semicolons, we omit them when there's a newline, but here, for commas, we do not.

The correct syntax should be:

var test = [
  "a"
  "b"
  "c"
]

Simpler, prettier. And it gives a few bonuses beyond the cosmetic aspect:

  1. It's less error-prone.
  2. It's more practical for version-control diffs: adding or removing a value only touches its own line.
  3. It's consistent with the semicolon behavior.

I would apply the same rule to function definitions, maps, or anything that needs commas, as in this example:

// One-line, comma.
func test(x: Int, y: Int) {}

// Or:

// Multi-lines, no comma.
func test(
  x: Int
  y: Int  
) {

}

In Go, those notations are not frequent because the formatting rules are stricter. However, in a multi-line composite literal (a struct literal, for instance), you must write this:

p := Point{
  X: 1,
  Y: 2,
}

(Note the trailing comma on the last line.)
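
That trailing comma is, by the way, another consequence of the semicolon-insertion rule: without it, the lexer would insert a semicolon after the last element and the literal would no longer parse. A minimal, complete sketch (Point and its values are just example names):

package main

import "fmt"

// The struct type definition itself needs no separators between fields.
type Point struct {
    X int
    Y int
}

func main() {
    // The comma after "Y: 2" is mandatory in a multi-line literal: without
    // it, the lexer would insert a semicolon at the end of that line and
    // the compiler would reject the literal with a syntax error.
    p := Point{
        X: 1,
        Y: 2,
    }
    fmt.Println(p)
}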

It's better than what Swift does (at least, diffs are not impacted and you don't have to think about adding or removing a comma: you just always add one).

But why did they choose to keep the commas? No comma at all would have been way better AND consistent with the semicolon rule.

And you know what is even more inconsistent? This, which is perfectly valid Go code:

var (
  x = 1
  y = 2
  z = 3
)

Yep, this time, there is nothing at all.
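
And the same separator-free style extends to Go's other grouped declarations, such as import and const blocks. A small, self-contained example (the names are placeholders):

package main

import (
    "fmt"
    "math"
)

const (
    greeting = "hello"
    answer   = 42
)

func main() {
    fmt.Println(greeting, answer, math.Pi)
}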

What makes an indie hit?

"What Makes an Indie Hit?: How to Choose the Right Design" by Ryan Clark (of Crypt of the NecroDancer) is an interesting read:

In this industry it's difficult to go far without learning from others. But from whom should we learn? I think it is wisest to study developers who have been repeatedly successful. Each time a developer creates another successful game, it becomes less and less likely that their repeated successes have been due to luck. Only a minuscule fraction of indie games break even, so what are the odds of developers like Jamie Cheng, Edmund McMillen, and Cliff Harris stringing together a number of successful games? The odds are low. There must be something other than luck at work! So perhaps these folks (and many others like them!) are the ones you should be studying and listening to.

I generally agree with the article (read it!), but there is one point that bothers me.

If you want people to remember your game, to talk about your game, to write articles about your game, etc, it needs to have a hook. Preferably multiple hooks!

Ok. This is spot on. However, while I think that innovative games are important and crucial for exploring original ideas and designs, not every game can be a new experience.

We need fun and excellent games in existing genres. Having brilliant roguelikes, RTSes, shmups, point & clicks, you name it… is equally important. And in this case, how do you hook people?

What we want to do with Steredenn (my game) is an excellent shmup-roguelike. We don't want to reinvent a genre or to experiment. We just want to make a great, fun, polished and addictive game. That's all. There's a market for a game like that — but we struggle to stand out from the crowd (and we have tried many things: articles, an arcade cabinet, tutorials, shows, a devblog, contacts, press, etc.).

If we had followed the "hooks rule", we would never have made Steredenn.1 But I truly think there's a place and an audience for this kind of game.

tl;dr: Should every game have a hook, a "gimmick", a unique twist? I don't think so.


To be fair, I want to discuss something else:

If you are unsure of the strength of your game's hooks, test them! With NecroDancer we did this by putting out a very early teaser trailer, and by demoing at PAX.

"Demoing at PAX" is not something that everyone can afford. I'm all for the "try your game in real as soon as possible" mantra, but going to a big show is out of scope for almost any indie. It costs a lot just to present your game — and I don't even count the transportation, the food and hostels.

When we went to Rezzed, we spent about £2000. It was close to us, so the travel was very cheap.2 Rezzed is also no PAX or Gamescom.

With our finances, we can afford one or two big exhibitions a year, if we are lucky. I don't think we are the only ones in this position. I even think that we are in a privileged situation compared to many indies.

I don't want to fool myself: big shows are very important for getting feedback, coverage and press. And this is something that many indies can't do at all. Languages, geography, finances — this is not an easy problem.

And as Ryan tells us a little bit later:

It is common these days for devs to downplay the importance of festivals, awards, and even of press. I disagree. Sure, accolades and reviews themselves may not drive mountains of sales, but most people need to hear about your game from numerous sources before they'll actually watch a trailer or buy the game.

Exactly. :)

To Ryan's credit, he's also ending his article with:

I know how difficult it is to succeed as a new indie dev, and I am aware of the advantages that I have due to experience, connections, and reputation.

Thanks for the tips, anyway!


  1. Well, we do try a few novel ideas in Steredenn, so it's not a pure "plain-old game". But the essence of the game is direct, action-packed gameplay. That's our main focus. 

  2. France to England. 

Quantum post-mortem

Fascinating post-mortem of Quantum, by its designer, Eric Zimmerman:

Quantum is a strategy game for two through four players that incorporates elements of tactics, resource management, and empire building. Players each begin the game with a small fleet of three starships on a customizable map built of modular tiles. Your goal is to expand by building Quantum Cubes on planets, along the way growing your fleet and evolving its abilities.

[…]

Another element that opened up the possibilities of the game during this early prototyping period was the addition of special power cards. While I was working on it, I remember seeing a talk at the Game Developers Conference by Rob Pardo, one of the lead designers of World of Warcraft. According to Rob, one of the philosophies at his company Blizzard was to make the player feel overpowered. According to Rob, special abilities and power-ups should feel mighty and spectacular, rather than just being some kind of incremental stat improvement.

As I started adding the advance cards to the game, I tried to have them embody this approach, especially when it came to the permanent card powers. My goal was that every card should feel incredibly powerful – a potential game winner in the right situation. I loved seeing my playtesters' faces grow greedy as they read the cards, astonished at how good the powers seemed to be.

The challenge of powerful cards, of course, is balancing them. […]

Designing the right mix of cards came down to good old-fashioned balancing and testing. There are a number of heuristics I tried to use in designing the cards to be balanced. For example, no card simply gives players an extra fourth action each turn; instead, some cards give you an extra action, but that action is limited, such as a free move that is only one space. There are also cards that do give you a completely open-ended extra action, but only if certain conditions are met, such as if you have more ships on the map than any other player.

[…]

It seemed hopeless; I had designed a game with a combat system that no one wanted to use! The solution to this problem was to directly link combat to winning the game.

[…]

One strong focus of our work was the language of Quantum: the terms in the rules, the names of the cards and units, and the other words we used in the game. In early prototypes, the titles of the advance cards were more technological: "Ferocious", the card that gives you a combat bonus, was originally called "Armor", while "Energetic", which gives you a movement bonus, was titled "Propulsion".

The names of the card were clear, but they felt generic. Thinking about how to emphasize the player-as-commander, I changed the names from nouns to adjectives. Rather than describing the player's fleet, the cards now described the player. Instead of "Fuel", "Evasion", and "Engineering", the cards now had names like "Brilliant", "Cruel", and "Stubborn".

Quantum is one of the best board games I've played recently. It was unavailable for a year, but I finally got my hands on the new edition. The post-mortem shows all the history behind the game and how some of its concepts were found and implemented.

The game is really fast, complex and deep. There's a bit of chance in the game through the dice rolls (obviously), but I don't think it matters that much in the end. Your decisions are way more important than your luck.

Quote — CGP Grey, the rule of two

The rule of two is that two is one — and one is none. This is applicable to so many things in your life. As a starting point, I often like to think of the rule of two with things that you have around the house. So, for example, if you have one roll of toilet paper, you really don't have any toilet paper. Because when that one runs out, you're in trouble. So you really need two rolls of toilet paper at all time. It's a redundancy rule, basically. It's where this comes from.

[…]

This is one of my little pieces of advice for trying to run a life very smoothly. It's that, everything that you can possibly have two of, you should have it. Two shampoo bottles, two bottles of vitamins, two boxes of cereal, two cartons of eggs. You want duplicates of everything. And then, when you're down to one of those things, that's the sign that you need to buy the next one. In this way, you are never out, you're never out of anything.

[…]

It's applicable to everything in your whole life, everything that's important.

CGP Grey

(Emphasis mine)

This is so spot on. I try to function like that, but I had never put this behavior into words. Here it is.

Grey continues:

Think this way with computer files: you have only one copy of that file, guess what? You have no copy of this file. I even think it's applicable to work. If you have one source of income, in many ways, it's like you have no source of income. Because if something happens with your main job, you are in lots and lots of trouble. One source of income, no source of income.

Dead or finished libraries?

Twelve Views of Mark Jason Dominus:

I released the Text::Template module several years ago, and it was immediately very successful. It's small, simple, fast, and it does a lot of things well. At the time, there were not yet 29 templating systems available on CPAN.

Anyway, the module quickly stabilized. I would get bug reports, and they would turn out to be bugs in the module's users, not in the module; I would get feature requests, and usually it turned out that what the requester wanted was possible, or even easy, without any changes to the module. Since the module was perfect, there was no need to upload new versions of it to CPAN.

But then I started to get disturbing emails. "Hi, I notice you have not updated Text::Template for nine months. Are you still maintaining it?" "Hi, I notice you seem to have stopped work on Text::Template. Have you decided to abandon this approach?" "Hi, I was thinking of using Text::Template, but I saw it wasn't being maintained any more, so I decided to use Junk::CrappyTemplate, because I wanted to be sure of getting support for it."

I started wondering if maybe the thing to do was to release a new version of Text::Template every month, with no changes, but with an incremented version number. Of course, that's ridiculous. But it seems that people assume that if you don't update the module every month, it must be dead. People seem to think that all software requires new features or frequent bug fixes. Apparently, the idea of software that doesn't get updated because it's finished is inconceivable.

I blame Microsoft.

I must confess that I tend to look at the date of the last commit when I choose a library over another (along with other metrics, hopefully).

Well, I also think that a "perfect library" might not exist, but the author is making a really good point in his talk.

(via What if we had more finished libraries?)

The web, the ads and the ad-blockers

Like almost everyone else, I don't like ads. Nevertheless, I don't use an ad-blocker either.

Using the web while blocking ads is hypocritical. Like it or not, ads are the most widely used business model on the web. That's how most people get paid for the content you read, watch and listen to for free.

There are other ways to do it (affiliate links, feed sponsorships, paywalls, etc.), but the dominant model today is ads.

You know what? I don't find ads particularly irritating anymore. Why? Because I find and read honest websites that respect me by not throwing huge walls of ads in my face. I avoid ad-filled websites because, most of the time, they are simply bad. And on YouTube? I wait patiently, because the great content made there that I want to see deserves to be paid for.1

The solution is not ad-blocking. The solution is to find well-made content that focuses on the readership, not the publisher.2


Let me finish with this:

There is a huge irony in the fact that AdBlock's function of keeping ads away from our content will eventually do the opposite. The alternative to ads alongside my content is ads inside my content.

Let's face it: paywalls don't work. The alternative on the horizon is native advertising. Buzzfeed is now famously refusing to host ads. Instead they sustain themselves by publishing content that subtly supports the agenda of any company with deep enough pockets to pay for it. A viewer's ability to distinguish between native ads and regular articles is small and quickly vanishing. If separate ads stop reaching people, the path to monetization remaining is to change your content to reflect someone else's agenda.

unholiness on Hacker News.

Because you cannot distinguish an ad from an article, it's even more insidious and dishonest. And this time, you won't be able to block it.


  1. Ads in apps? I pay to remove them when I can, or I find better, paid alternatives. 

  2. And if you really want to use AdBlock, only block abusive websites. 

Force Touch Affordance

Force Touch is an impressive piece of technology. It currently ships on the Apple Watch and the new MacBooks. You press firmly on the screen and the haptic engine registers a "deep" press.

On the Watch, I found that the feedback was not properly mapped to the press (it's a tap on your wrist, not on your finger). I didn't use the Watch a lot, but I also found that it was sometimes hard to trigger.

On the new MacBooks' trackpad, the feeling is fantastic. You can push multiple times and sense the force you apply as if it were real. You may not even realize that it's not a true physical button.

But… Force Touch is not discoverable.

There is no visual cue that Force Touch is available — either on the screen or on a specific control. It's like the menu button in old Android versions: you tap somewhere (the menu button on Android, firmly with Force Touch) and something may happen. Or not. It's confusing, and it's one more hidden gesture with no affordance. It's no better than a long press if its sole purpose is to be another context menu.

Indeed, on the Apple Watch, Force Touch's goal is to show a secret menu with additional actions. I understand the purpose — the screen is small and you can't fit everything on it. But this exact same behavior on a bigger display will have the same problem that Android had with its menu button. It's even worse than a hamburger menu, because there isn't even an icon to tell you that an additional layer is present.

On OS X, I found that the usage of Force Touch is better: it's not a way to show secondary actions, but just a "deeper" version of an action. In QuickTime, for example, you press firmly on the forward button and the video fast-forwards faster with each level you reach. It's still hard to know when it's available, and it might result in nothing, but at least it's consistent with its source.

Space Odyssey

I've finally convinced myself to watch 2001: A Space Odyssey.1

N.B.: If you haven't seen it, I don't think you should read what's coming. Spoilers ahead.

Well. I didn't love the movie (the pace is really slow, even for a movie from the sixties, and not being a fan of classical scores might not have helped). I don't mind if a film is ambiguous or doesn't explain much, but 2001 may have gone a bit too far.

I get that the monoliths are alien artifacts (or an alien race). I get that David Bowman has been placed in a sort of zoo. I vaguely understood that the Star Gate is a kind of allegory of human reproduction (there's even a foetus and an umbilical cord during this passage — and the Star Child coming back to Earth can also be seen as a birth). But why? Why would an extraterrestrial race want to improve another race? Why use monoliths? Why would they open the gate near Jupiter and not on the Moon?

The HAL sequence was perfectly clear, however, even if they don't explain why it was malfunctioning (it doesn't matter). One interpretation points to the duality between being a perfect computer and having to lie to the crew about the real purpose of the mission — this would create a paradox inside the machine. I like that idea.

What annoyed me the most, in the end, were the expectations I had. I always thought that the story would be phenomenal. And… it's not, really. It's pretty simple once you think about it (the lack of explanations doesn't change that fact — it just wraps the movie in more mystery). I had the same reaction after finishing Blade Runner a few years ago.

It's not the movie's fault: every awesome idea it invented has been reused everywhere since.


I was pleasantly surprised to find that the movie does not feel that dated: the images are still beautiful and the depiction of space is really well-done (especially for its time).

What I loved about the movie are some of the possible interpretations. I'm really eager to read the novel by Arthur C. Clarke to dive a bit more into the story. Especially for this theory:

Arthur C. Clarke's theory of the future symbiosis of man and machine, expanded by Kubrick into what Wheat calls "a spoofy three-evolutionary leaps scenario": ape to man, an abortive leap from man to machine, and a final, successful leap from man to 'Star Child'.

I've also found a great quote by Stanley Kubrick about intelligent life when I was reading articles about the movie:

I will say that the God concept is at the heart of 2001 but not any traditional, anthropomorphic image of God.

I don't believe in any of Earth's monotheistic religions, but I do believe that one can construct an intriguing scientific definition of God, once you accept the fact that there are approximately 100 billion stars in our galaxy alone, that each star is a life-giving sun and that there are approximately 100 billion galaxies in just the visible universe. Given a planet in a stable orbit, not too hot and not too cold, and given a few billion years of chance chemical reactions created by the interaction of a sun's energy on the planet's chemicals, it's fairly certain that life in one form or another will eventually emerge.

It's reasonable to assume that there must be, in fact, countless billions of such planets where biological life has arisen, and the odds of some proportion of such life developing intelligence are high.

Now, the sun is by no means an old star, and its planets are mere children in cosmic age, so it seems likely that there are billions of planets in the universe not only where intelligent life is on a lower scale than man but other billions where it is approximately equal and others still where it is hundreds of thousands of millions of years in advance of us.

When you think of the giant technological strides that man has made in a few millennia—less than a microsecond in the chronology of the universe—can you imagine the evolutionary development that much older life forms have taken? They may have progressed from biological species, which are fragile shells for the mind at best, into immortal machine entities—and then, over innumerable eons, they could emerge from the chrysalis of matter transformed into beings of pure energy and spirit.

Their potentialities would be limitless and their intelligence ungraspable by humans.


  1. It was one of the greatest shames of my movie culture, especially considering that I love science-fiction books and films. 

Perspectives on wearables

For a few reasons, I'm considering whether or not I should buy an Apple Watch.1

For many people I know, it's another useless gadget.

That's fine: most people are sceptical about new technologies, after all. It's in our nature: humans don't like change (and new things) until it reaches a certain threshold and becomes acceptable. And to be fair, it might just be a fad.

The iPhone (and other touch-based smartphones) was also considered a useless gadget in its early days — almost everybody has a smartphone now, right?2

But like the iPhone, there's a chance the Apple Watch will succeed and become a thing (that's a big if, but I'm still more inclined to believe it will than not).

For the moment, I don't think there are many uses that make a watch (or other wearable devices) truly better than a phone. Hopefully, we are only at the beginning. To understand what we can do as developers, we have to use these devices daily.

That's not to say they are completely useless as of today.

The usages that intrigue me the most are the ones described below.3

But not notifications.


I'm a huge proponent of disabling almost all notifications.

I restrict the apps that can send notifications to the bare minimum (SMS, calendar, reminders and that's most of it) because I think that notifications are a nuisance.

You don't need to know when someone followed you on Instagram or Twitter. You don't need to be interrupted when you receive an email. That's also why I'm even more aggressive about notification sounds (I do receive my email every hour, but silently). I'd rather grab my phone and act on something willingly than be constantly distracted.

People who are flooded by notifications (like tech reviewers) and criticize a device because of that ARE the issue, not the device. A wearable will only make their problem worse, because it will also give those nuisances access to a device that is physically attached to their body. And which can tap them anytime.


What really makes sense is having quick, pertinent, contextual information available at a glance.

Like when your train is about to leave — which platform, which seat? That's useful: you're in a hurry, you're walking quickly, and taking your phone out of your pocket takes time and precision.

Having your itinerary directions given to you by a few vibrations? That's great too.

Having a full-fledged Twitter app on your wrist? Useless. Getting your Facebook likes instantly? Useless.4


I'm interested in wearables (when they are not socially awkward) because of what they might become. What they are today is just a glimpse of the potential of such devices.

Imagine the applications in the health field: with sensors on your body, it could be really easy to detect and track diseases. To get there, we need to start small. For the moment, that means a watch with a heart-rate sensor and 18 hours of battery life.


  1. I probably won't, for the moment. Priorities. 

  2. And that's even the case in developing countries: 

    Across all of this, and far more important, we are now well on our way to having some 3.5bn to 4bn people on earth with a smartphone. […] For the first time ever, the tech industry is selling not just to big corporations or middle-class families but to four fifths of all the adults on earth - it is selling to people who don’t have mains electricity or running water and substitute spending on cigarettes for mobile.

    "New questions in mobile" by Benedict Evans.

  3. The watch feature is probably the least important feature of a smartwatch. It's handy, like it is on a phone. But like the phone feature of a smartphone, it's not the decisive capability. 

    Go on people, make jokes about a watch that is only able to tell the time for a day before running out of battery. ;)

  4. But that's also true for the web or an app, to be honest. 

Origami 2.0

Facebook released Origami 2.0 yesterday… and wow, that's a huge update.

I already wrote about Origami a year ago. I haven't had the chance to use it much since, but I'm still keeping an eye on it. And I'm really impressed by the 2.0 version.

Brandon Walkin on Medium:

Today, we’re excited to release Origami Live for iOS along with a major new version of Origami for Mac. Origami Live is a new app that lets you use your Origami prototypes on your iPhone or iPad. Alongside it, we’re releasing Origami 2.0, which has a ton of new features, including code exporting, powerful gesture support, Sketch integration, presentation mode, and more.

I've watched the new video tutorials on the site and tested Origami Live (a companion app that displays your prototype right on your phone). It worked flawlessly and the results were impressive. The app is smooth and perfectly synchronized with the desktop viewer.

I didn't try the Sketch integration, but if it works as advertised, that's a big win for my workflow.

They have also added a ton of new shortcuts to speed up the app. For example, if you press t between two nodes (while hovering over the output or input of a node), Origami will automatically insert a transition node between them:

Origami new shortcuts

There's a lot more that you can learn by watching the videos or reading the full list of shortcuts. The documentation is much more complete than when I tried Origami last year. At the time, it was really difficult to get into it. It looks way easier now.1

You can download Origami here (it requires Quartz Composer).


If you are interested in Quartz Composer & Origami, there's also a plugin named "Avocado" that enhances Origami. It has been updated for the 2.0 release, but I haven't used it with the latest version.


  1. I might be wrong, but I also have the impression that they removed the need for the patch inspector. It looks like you can do everything directly on the canvas now. But that may have already been the case in previous versions. 

Something Slightly Less Terrible

Fantastic interview with Loren Brichter on objc.io:

The more I learn, the more terrible I think programming is. I’d love to rip everything up and start over. But you can only swim against the tide so far, so it’s sometimes satisfying to sift through the garbage and repurpose terrible technologies to make something that is slightly less terrible.

[…]

It’s not like a boat with a couple of holes that we can patch; it’s more like trying to sail across an ocean on a pile of accrued garbage. Sure, some of the stuff floats, and it keeps some other stuff from sinking. A better question might be: which parts are good? And you can only answer that if you look at a thing in isolation.