Friday 04 January 2019
1st Edition
Walden Pond
  1. The invention of ‘heterosexuality’ 16 minutes
  2. October 2015: JavaScript Iterators and Generators 13 minutes
  3. The Generalized Specialist: How Shakespeare, Da Vinci, and Kepler Excelled 13 minutes
  4. Getting to Know TensorFlow 12 minutes
  5. Tropical Treats Tasting Time Part One: February in Florida 10 minutes
  6. How to poop like an astronaut 12 minutes
  7. On Washington’s McNeil Island, the only residents are 214 dangerous sex offenders 11 minutes
  8. Crafting link underlines on Medium 10 minutes
  9. After Temporality 10 minutes
  10. RECONSIDER 13 minutes
  11. Power to the People: How One Unknown Group of Researchers Holds the Key to Using AI to Solve Real Human Problems 14 minutes

The invention of ‘heterosexuality’

bbc.com · Thursday 16 March 2017 · Brandon Ambrosino · 16 minute read
The 1901 Dorland’s Medical Dictionary defined heterosexuality as an “abnormal or perverted appetite toward the opposite sex.” More than two decades later, in 1923, Merriam Webster’s dictionary similarly defined it as “morbid sexual passion for one of the opposite sex.

The 1901 Dorland’s Medical Dictionary defined heterosexuality as an “abnormal or perverted appetite toward the opposite sex.” More than two decades later, in 1923, Merriam Webster’s dictionary similarly defined it as “morbid sexual passion for one of the opposite sex.” It wasn’t until 1934 that heterosexuality was graced with the meaning we’re familiar with today: “manifestation of sexual passion for one of the opposite sex; normal sexuality.”

Whenever I tell this to people, they respond with dramatic incredulity. That can’t be right! Well, it certainly doesn’t feel right. It feels as if heterosexuality has always “just been there.”

A few years ago, there began circulating a “man on the street” video, in which the creator asked people if they thought homosexuals were born with their sexual orientations. Responses were varied, with most saying something like, “It’s a combination of nature and nurture.” The interviewer then asked a follow-up question, which was crucial to the experiment: “When did you choose to be straight?” Most were taken aback, confessing, rather sheepishly, never to have thought about it. Feeling that their prejudices had been exposed, they ended up swiftly conceding the videographer’s obvious point: gay people were born gay just like straight people were born straight.

The video’s takeaway seemed to suggest that all of our sexualities are “just there”; that we don’t need an explanation for homosexuality just as we don’t need one for heterosexuality. It seems not to have occurred to those who made the video, or the millions who shared it, that we actually need an explanation for both.

There’s been a lot of good work, both scholarly and popular, on the social construction of homosexual desire and identity. As a result, few would bat an eye when there’s talk of “the rise of the homosexual” – indeed, most of us have learned that homosexual identity did come into existence at a specific point in human history. What we’re not taught, though, is that a similar phenomenon brought heterosexuality into its existence.

There are many reasons for this educational omission, including religious bias and other types of homophobia. But the biggest reason we don’t interrogate heterosexuality’s origins is probably because it seems so, well, natural. Normal. No need to question something that’s “just there.”

But heterosexuality has not always “just been there.” And there’s no reason to imagine it will always be.

When heterosexuality was abnormal

The first rebuttal to the claim that heterosexuality was invented usually involves an appeal to reproduction: it seems obvious that different-genital intercourse has existed for as long as humans have been around – indeed, we wouldn’t have survived this long without it. But this rebuttal assumes that heterosexuality is the same thing as reproductive intercourse. It isn’t.

“Sex has no history,” writes queer theorist David Halperin at the University of Michigan, because it’s “grounded in the functioning of the body.” Sexuality, on the other hand, precisely because it’s a “cultural production,” does have a history. In other words, while sex is something that appears hardwired into most species, the naming and categorising of those acts, and those who practise those acts, is a historical phenomenon, and can and should be studied as such.

Or put another way: there have always been sexual instincts throughout the animal world (sex). But at a specific point in time, humans attached meaning to these instincts (sexuality). When humans talk about heterosexuality, we’re talking about the second thing.

Hanne Blank offers a helpful way into this discussion in her book Straight: The Surprisingly Short History of Heterosexuality with an analogy from natural history. In 2007, the International Institute for Species Exploration listed the fish Electrolux addisoni as one of the year’s “top 10 new species.” But of course, the species didn’t suddenly spring into existence 10 years ago – that’s just when it was discovered and scientifically named. As Blank concludes: “Written documentation of a particular kind, by an authority figure of a particular kind, was what turned Electrolux from a thing that just was … into a thing that was known.”

Something remarkably similar happened with heterosexuals, who, at the end of the 19th Century, went from merely being there to being known. “Prior to 1868, there were no heterosexuals,” writes Blank. Neither were there homosexuals. It hadn’t yet occurred to humans that they might be “differentiated from one another by the kinds of love or sexual desire they experienced.” Sexual behaviours, of course, were identified and catalogued, and often times, forbidden. But the emphasis was always on the act, not the agent.

So what changed? Language.

In the late 1860s, Hungarian journalist Karl Maria Kertbeny coined four terms to describe sexual experiences: heterosexual, homosexual, and two now forgotten terms to describe masturbation and bestiality; namely, monosexual and heterogenit. Kertbeny used the term “heterosexual” a decade later when he was asked to write a book chapter arguing for the decriminalisation of homosexuality. The editor, Gustav Jager, decided not to publish it, but he ended up using Kertbeny’s novel term in a book he later published in 1880.

The next time the word was published was in 1889, when Austro-German psychiatrist Richard von Krafft-Ebing included the word in Psychopathia Sexualis, a catalogue of sexual disorders. But in almost 500 pages, the word “heterosexual” is used only 24 times, and isn’t even indexed. That’s because Krafft-Ebing is more interested in “contrary sexual instinct” (“perversions”) than “sexual instinct,” the latter being for him the “normal” sexual desire of humans.

“Normal” is a loaded word, of course, and it has been misused throughout history. Hierarchical ordering leading to slavery was at one time accepted as normal, as was a geocentric cosmology. It was only by questioning the foundations of the consensus view that “normal” phenomena were dethroned from their privileged positions.

The emphasis on procreation comes not primarily from Jewish or Christian Scriptures, but from Stoicism

For Krafft-Ebing, normal sexual desire was situated within a larger context of procreative utility, an idea that was in keeping with the dominant sexual theories of the West. In the Western world, long before sex acts were separated into the categories hetero/homo, there was a different ruling binary: procreative or non-procreative. The Bible, for instance, condemns homosexual intercourse for the same reason it condemns masturbation: because life-bearing seed is spilled in the act. While this ethic was largely taught, maintained, and enforced by the Catholic Church and later Christian offshoots, it’s important to note that the ethic comes not primarily from Jewish or Christian Scriptures, but from Stoicism.

As Catholic ethicist Margaret Farley points out, Stoics “held strong views on the power of the human will to regulate emotion and on the desirability of such regulation for the sake of inner peace”. Musonius Rufus, for example, argued in On Sexual Indulgence that individuals must protect themselves against self-indulgence, including sexual excess. To curb this sexual indulgence, notes theologian Todd Salzman, Rufus and other Stoics tried to situate it “in a larger context of human meaning” – arguing that sex could only be moral in the pursuit of procreation. Early Christian theologians took up this conjugal-reproductive ethic, and by the time of Augustine, reproductive sex was the only normal sex.

While Krafft-Ebing takes this procreative sexual ethic for granted, he does open it up in a major way. “In sexual love the real purpose of the instinct, the propagation of the species, does not enter into consciousness,” he writes.

In other words, sexual instinct contains something like a hard-wired reproductive aim – an aim that is present even if those engaged in ‘normal’ sex aren’t aware of it. Jonathan Ned Katz, in The Invention of Heterosexuality, notes the impact of Krafft-Ebing’s move. “Placing the reproductive aside in the unconscious, Krafft-Ebing created a small, obscure space in which a new pleasure norm began to grow.”

The importance of this shift – from reproductive instinct to erotic desire – can’t be overstated, as it’s crucial to modern notions of sexuality. When most people today think of heterosexuality, they might think of something like this: Billy understands from a very young age he is erotically attracted to girls. One day he focuses that erotic energy on Suzy, and he woos her. The pair fall in love, and give physical sexual expression to their erotic desire. And they live happily ever after.

Without Krafft-Ebing’s work, this narrative might not have ever become thought of as “normal.” There is no mention, however implicit, of procreation. Defining normal sexual instinct according to erotic desire was a fundamental revolution in thinking about sex. Krafft-Ebing’s work laid the groundwork for the cultural shift that happened between the 1923 definition of heterosexuality as “morbid” and its 1934 definition as “normal.”

Sex and the city

Ideas and words are often products of their time. That is certainly true of heterosexuality, which was borne out of a time when American life was becoming more regularised. As Blank argues, the invention of heterosexuality corresponds with the rise of the middle class.

The invention of heterosexuality corresponds with the rise of the middle class

In the late 19th Century, populations in European and North American cities began to explode. By 1900, for example, New York City had 3.4 million residents – 56 times its population just a century earlier. As people moved to urban centres, they brought their sexual perversions – prostitution, same-sex eroticism – with them. Or so it seemed. “By comparison to rural towns and villages,” Blank writes, “the cities seemed like hotbeds of sexual misconduct and excess.” When city populations were smaller, says Blank, it was easier to control such behaviour, just as it was easier to control when it took place in smaller, rural areas where neighbourly familiarity was a norm. Small-town gossip can be a profound motivator.

Because the increasing public awareness of these sexual practices paralleled the influx of lower classes into cities, “urban sexual misconduct was typically, if inaccurately, blamed” on the working class and poor, says Blank. It was important for an emerging middle class to differentiate itself from such excess. The bourgeois family needed a way to protect its members “from aristocratic decadence on the one side and the horrors of the teeming city on the other”. This required “systematic, reproducible, universally applicable systems for social management that could be implemented on a large scale”.

In the past, these systems could be based on religion, but “the new secular state required secular justification for its laws,” says Blank. Enter sex experts like Krafft-Ebing, who wrote in the introduction to his first edition of Psychopathia that his work was designed “to reduce [humans] to their lawful conditions.” Indeed, continues the preface, the present study “exercises a beneficent influence upon legislation and jurisprudence”.

Krafft-Ebing’s work chronicling sexual irregularity made it clear that the growing middle class could no longer treat deviation from normal (hetero) sexuality merely as sin, but as moral degeneracy – one of the worst labels a person could acquire. “Call a man a ‘cad’ and you’ve settled his social status,” wrote William James in 1895. “Call him a ‘degenerate’ and you’ve grouped him with the most loathsome specimens of the human race.” As Blank points out, sexual degeneracy became a yardstick to determine a person’s measure.

Degeneracy, after all, was the reverse process of social Darwinism. If procreative sex was critical to the continuous evolution of the species, deviating from that norm was a threat to the entire social fabric. Luckily, such deviation could be reversed, if it was caught early enough, thought the experts.

The formation of “sexual inversion” occurred, for Krafft-Ebing, through several stages, and was curable in the first. Through his work, writes Ralph M Leck, author of Vita Sexualis, “Krafft-Ebing sent out a clarion call against degeneracy and perversion. All civic-minded people must take their turn on the social watch tower.” And this was certainly a question of civics: most colonial personnel came from the middle class, which was large and growing.

Though some non-professionals were familiar with Krafft-Ebing’s work, it was Freud who gave the public scientific ways to think about sexuality. While it’s difficult to reduce the doctor’s theories to a few sentences, his most enduring legacy is his psychosexual theory of development, which held that children develop their own sexualities via an elaborate psychological parental dance.

For Freud, heterosexuals weren’t born this way, but made this way. As Katz points out, heterosexuality for Freud was an achievement; those who attained it successfully navigated their childhood development without being thrown off the straight and narrow.

And yet, as Katz notes, it takes an enormous imagination to frame this navigation in terms of normality:

According to Freud, the normal road to heterosexual normality is paved with the incestuous lust of boy and girl for parent of the other sex, with boy’s and girl’s desire to murder their same-sex parent-rival, and their wish to exterminate any little sibling-rivals. The road to heterosexuality is paved with blood-lusts… The invention of the heterosexual, in Freud’s vision, is a deeply disturbed production.

That such an Oedipal vision endured for so long as the explanation for normal sexuality is “one more grand irony of heterosexual history,” he says.

Still, Freud’s explanation seemed to satisfy the majority of the public, who, continuing their obsession with standardising every aspect of life, happily accepted the new science of normal. Such attitudes found further scientific justification in the work of Alfred Kinsey, whose landmark 1948 study Sexual Behavior in the Human Male sought to rate the sexuality of men on a scale of zero (exclusively heterosexual) to six (exclusively homosexual). His findings led him to conclude that a large, if not majority, “portion of the male population has at least some homosexual experience between adolescence and old age”. While Kinsey’s study did open up the categories homo/hetero to allow for a certain sexual continuum, it also “emphatically reaffirmed the idea of a sexuality divided between” the two poles, as Katz notes.

The future of heterosexuality

And those categories have lingered to this day. “No one knows exactly why heterosexuals and homosexuals ought to be different,” wrote Wendell Ricketts, author of the 1984 study Biological Research on Homosexuality. The best answer we’ve got is something of a tautology: “heterosexuals and homosexuals are considered different because they can be divided into two groups on the basis of the belief that they can be divided into two groups.”

Though the hetero/homo divide seems like an eternal, indestructible fact of nature, it simply isn’t. It’s merely one recent grammar humans have invented to talk about what sex means to us.

Heterosexuality, argues Katz, “is invented within discourse as that which is outside discourse. It’s manufactured in a particular discourse as that which is universal… as that which is outside time.” That is, it’s a construction, but it pretends it isn’t. As any French philosopher or child with a Lego set will tell you, anything that’s been constructed can be deconstructed, as well. If heterosexuality didn’t exist in the past, then it doesn’t need to exist in the future.

I was recently caught off guard by Jane Ward, author of Not Gay, who, during an interview for a piece I wrote on sexual orientation, asked me to think about the future of sexuality. “What would it mean to think about people’s capacity to cultivate their own sexual desires, in the same way we might cultivate a taste for food?” Though some might be wary of allowing for the possibility of sexual fluidity, it’s important to realise that various Born This Way arguments aren’t accepted by the most recent science. Researchers aren’t sure what “causes” homosexuality, and they certainly reject any theories that posit a simple origin, such as a “gay gene.” It’s my opinion that sexual desires, like all our desires, shift and re-orient throughout our lives, and that as they do, they often suggest to us new identities. If this is true, then Ward’s suggestion that we can cultivate sexual preferences seems fitting. (For more of the scientific evidence behind this argument, read BBC Future’s ‘I am gay – but I wasn’t born this way’.)

Beyond Ward’s question is a subtle challenge: If we’re uncomfortable with considering whether and how much power we have over our sexualities, why might that be? Similarly, why might we be uncomfortable with challenging the belief that homosexuality, and by extension heterosexuality, are eternal truths of nature?

In an interview with the journalist Richard Goldstein, the novelist and playwright James Baldwin admitted to having good and bad fantasies of the future. One of the good ones was that “No one will have to call themselves gay,” a term Baldwin admits to having no patience for. “It answers a false argument, a false accusation.”

Which is what?

“Which is that you have no right to be here, that you have to prove your right to be here. I’m saying I have nothing to prove. The world also belongs to me.”

Fewer than half of British 18-24 year-olds identify as being 100% heterosexual

Once upon a time, heterosexuality was necessary because modern humans needed to prove who they were and why they were, and they needed to defend their right to be where they were. As time wears on, though, that label seems to actually limit the myriad ways we humans understand our desires and loves and fears. Perhaps that is one reason a recent UK poll found that fewer than half of those aged 18-24 identify as “100% heterosexual.” That isn’t to suggest a majority of those young respondents regularly practise bisexuality or homosexuality; rather it shows that they don’t seem to have the same need for the word “heterosexual” as their 20th-Century forebears.

Debates about sexual orientation have tended to focus on a badly defined concept of “nature.” Because different-sex intercourse generally results in the propagation of the species, we award it a special moral status. But “nature” doesn’t reveal to us our moral obligations – we are responsible for determining those, even when we aren’t aware we’re doing so. To leap from an observation of how nature is to a prescription of how nature ought to be is, as philosopher David Hume noted, to commit a logical fallacy.

Why judge what is natural and ethical to a human being by his or her animal nature? Many of the things human beings value, such as medicine and art, are egregiously unnatural. At the same time, humans detest many things that actually are eminently natural, like disease and death. If we consider some naturally occurring phenomena ethical and others unethical, that means our minds (the things looking) are determining what to make of nature (the things being looked at). Nature doesn’t exist somewhere “out there,” independently of us – we’re always already interpreting it from the inside.

Until this point in our Earth’s history, the human species has been furthered by different-sex reproductive intercourse. About a century ago, we attached specific meanings to this kind of intercourse, partly because we wanted to encourage it. But our world is very different now than what it was. Technologies like preimplantation genetic diagnosis (PGD) and in vitro fertilisation (IVF) are only improving. In 2013, more than 63,000 babies were conceived via IVF. In fact, more than five million children have been born through assisted reproductive technologies. Granted, this number still keeps such reproduction in the slim minority, but all technological advances start out with the numbers against them.

Socially, too, heterosexuality is losing its “high ground,” as it were. If there was a time when homosexual indiscretions were the scandals du jour, we’ve since moved on to another world, one riddled with the heterosexual affairs of politicians and celebrities, complete with pictures, text messages, and more than a few video tapes. Popular culture is replete with images of dysfunctional straight relationships and marriages. Further, between 1960 and 1980, Katz notes, the divorce rate rose 90%. And while it’s dropped considerably over the past three decades, it hasn’t recovered so much that anyone can claim “relationship instability” is something exclusive to homosexuality, as Katz shrewdly notes.

The line between heterosexuality and homosexuality isn’t just blurry, as some take Kinsey’s research to imply – it’s an invention, a myth, and an outdated one. Men and women will continue to have different-genital sex with each other until the human species is no more. But heterosexuality – as a social marker, as a way of life, as an identity – may well die out long before then.

Brandon Ambrosino has written for the New York Times, Boston Globe, The Atlantic, Politico, Economist, and other publications. He lives in Delaware, and is a graduate student in theology at Villanova University.


From: http://www.bbc.com/future/story/20170315-the-invention-of-heterosexuality

October 2015: JavaScript Iterators and Generators

objectcomputing.com · ND · ¯\_(ツ)_/¯ · 13 minute read
The latest version of JavaScript defined by ECMAScript 2015 (a.k.a. ES6) adds many new features. While most are easy to understand, iterators and generators require a bit more effort. This article provides a baseline for what you need to get started.

JavaScript Iterators and Generators

by R. Mark Volkmann, Partner and Principal Software Engineer

October 2015

Introduction

The latest version of JavaScript defined by ECMAScript 2015 (a.k.a. ES6) adds many new features. While most are easy to understand, iterators and generators require a bit more effort. This article provides a baseline for what you need to get started.

This article assumes that you are familiar with several ES 2015 features, including arrow functions, classes, destructuring, for-of loop, let/const, and the spread operator. If you need to brush up on these, check out Luke Hoban’s overview at https://github.com/lukehoban/es6features, and my slides on ES 2015 at http://ociweb.com/mark. There is also a video of a talk I gave on ES 2015 here.

Iterators are objects that have a next method. They are used to visit elements in a sequence. It is possible for the values in the sequence to be generated in a lazy manner.

The next method returns an object with value and/or done properties. It’s best to return a new object from each call to next because callers might cache the object that is returned and not examine its properties until later. When the end of the sequence is reached, the done property will be true. Otherwise, this property may be omitted since it will then be undefined, which is treated as false (return {value: some-value}). For infinite sequences, the done property never becomes true.

Whether or not the value property has meaning when the done property is true depends on the iterator. For most iterators, the value property is not used when the done property is true. The three language constructs that consume iterables - the for-of loop, spread operator, and destructuring - follow this convention. These are discussed in more detail later.

When the end of the sequence has been reached, the value property may be omitted (return {done: true}).
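
To make the protocol above concrete, here is a minimal, hand-written iterator over a finite range. This is a sketch added for illustration; the rangeIterator name is not from the original article.

// Returns an iterator that counts from start up to, but not including, end.
function rangeIterator(start, end) {
  let current = start;
  return {
    next() {
      if (current === end) return {done: true}; // value omitted at the end
      return {value: current++}; // done omitted, so it is treated as false
    }
  };
}

const it = rangeIterator(1, 4);
console.log(it.next()); // {value: 1}
console.log(it.next()); // {value: 2}
console.log(it.next()); // {value: 3}
console.log(it.next()); // {done: true}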

Iterables are objects that have a method whose name is the value of Symbol.iterator. This method returns an iterator object.

An object may be both an iterable and an iterator. When this is the case, the method whose name is Symbol.iterator returns the object on which it is called, and that same object has the next method required by iterators. Therefore, obj[Symbol.iterator].call(obj) === obj.

Iterable/Iterator Example

The following example generates numbers in the Fibonacci sequence. The object referred to by the variable fibonacci is an iterable. The Symbol.iterator method returns an iterator. Use of the fibonacci variable is illustrated with a for-of loop. Note that this loop breaks out when a value greater than 100 is returned. This is necessary since the sequence is infinite.

const fibonacci = {
  [Symbol.iterator]() {
    let n1 = 0, n2 = 1, value;
    return {
      next() {
        // The next line performs parallel assignment using destructuring.
        // It is equivalent to value = n1; n1 = n2; n2 = n1 + n2;
        [value, n1, n2] = [n1, n2, n1 + n2];
 
        // The next line is equivalent to return {value: value};
        return {value};
      }
    };
  }
};
 
// Note that "let" could be used in place of "const" on the next line,
// but "const" is more correct here because each iteration
// gets a new binding for the loop variable n
// and it is not modified in the loop body.
for (const n of fibonacci) {
  if (n > 100) break;
  console.log(n);
  // outputs 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, and 89
}

Iterable Objects

Objects from these built-in classes are iterable:

  • Array - iterates over elements
  • Set - iterates over elements
  • Map - iterates over key/value pairs as [key, value]
  • DOM NodeList - iterates over Node objects (requires browser support)

Primitive strings are iterable over their Unicode (UTF-16) code points, each occupying two or four bytes.
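
For example, a string containing a character outside the Basic Multilingual Plane iterates as a single value even though it occupies two UTF-16 code units (a small sketch for illustration):

const s = 'a\u{1F600}b'; // 'a', an emoji outside the BMP, then 'b'
for (const ch of s) {
  console.log(ch); // logs three values: 'a', the emoji, 'b'
}
console.log(s.length); // 4, because length counts UTF-16 code units
console.log([...s].length); // 3, because iteration is by code point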

These methods on Array (including typed arrays), Set, and Map return an iterator:

  • entries - over key/value pairs as [key, value]
  • keys - over keys
  • values - over values

The objects returned by these methods are both iterables and iterators.

For arrays, keys are indices. For sets, keys are the same as values.
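
Here is a brief sketch of these methods on each collection type (the variable names are only for illustration):

const letters = ['p', 'q'];
console.log([...letters.entries()]); // [[0, 'p'], [1, 'q']] - for arrays, keys are indices
console.log([...letters.keys()]); // [0, 1]

const pair = new Set(['x', 'y']);
console.log([...pair.keys()]); // ['x', 'y'] - for sets, keys are the same as values

const ages = new Map([['ann', 35], ['bob', 40]]);
console.log([...ages.entries()]); // [['ann', 35], ['bob', 40]]
console.log([...ages.values()]); // [35, 40]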

Custom objects may be made iterable by adding a Symbol.iterator method. We’ll see an example of this below.

Ordinary objects, such as those created from object literals, are not iterable. When this is desired, either use the Map class or write a function like the following:

function objectEntries(obj) {
  let index = 0;
  let keys = Reflect.ownKeys(obj); // This gets both string and symbol keys.
  return { // The object returned is both an iterable and an iterator.
    [Symbol.iterator]() { return this; },
    next() {
      if (index === keys.length) return {done: true};
      let k = keys[index++], v = obj[k];
      return {value: [k, v]};
    }
  };
}
 
let obj = {foo: 1, bar: 2, baz: 3};
for (const [k, v] of objectEntries(obj)) {
  console.log(k, 'is', v);
}

To avoid iterating over symbol keys, use Object.getOwnPropertyNames(obj) instead of Reflect.ownKeys(obj).

An alternative to the function above is to use Reflect.enumerate(obj) to get an iterable over just the keys of an object.

Iterable Consumers

There are several new language constructs that consume iterables.

for-of Loop

for (const value of someIterable) { ... } // This iterates over all values.

spread Operator

// This can add all values from an iterable into a new array.
let arr = [firstElem, ...someIterable, lastElem];
 
// This can use all values from an iterable as arguments
// to a function, method, or constructor call.
someFunction(firstArg, ...someIterable, lastArg);

Positional Destructuring

let [a, b, c] = someIterable; // This gets the first three values.

Several constructors and methods of provided classes consume iterables. The Set constructor takes an iterable over values for initializing a new Set. The Map constructor takes an iterable over key/value pairs for initializing a new Map. The Promise methods all and race take an iterable over promises.
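
For instance (a short sketch; the variable names are not from the article):

const unique = new Set([1, 2, 2, 3]); // the Set constructor consumes an iterable of values
console.log([...unique]); // [1, 2, 3]

const lookup = new Map([['a', 1], ['b', 2]]); // the Map constructor consumes an iterable of [key, value] pairs
console.log(lookup.get('b')); // 2

Promise.all([Promise.resolve(1), Promise.resolve(2)])
  .then(values => console.log(values)); // [1, 2]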

Generators

Generators are a special kind of iterator that is also iterable. They can be paused and resumed at multiple points, each specified using the yield keyword. The yield keyword can only be used in generator functions. Each call to next returns the value of the next yield expression. To yield a single value, use yield value. To yield each value returned by an iterable one at a time, use yield* iterable. Note that this iterable can be another generator, or even the same kind of generator obtained recursively.

A generator exits by running off the end of the function that defines it, returning a specific value using the return keyword, or throwing an error. The done property will be true after any of these and will remain true.

A "generator function" returns a generator object. These are defined using function* instead of function. Generator functions may be defined in class definitions by preceding a method name with *.

A Basic Generator

// This is a generator function.
function* myGenFn() {
  yield 1;
  yield 2;
  return 3;
}
 
let myGen = myGenFn(); // This creates a generator.
console.log(myGen.next()); // {value: 1, done: false}
console.log(myGen.next()); // {value: 2, done: false}
console.log(myGen.next()); // {value: 3, done: true}
 
for (const n of myGenFn()) {
  // This outputs 1, then 2, but not 3 because done is true for this value.
  console.log(n);
}

Note that without the return statement in the generator, the call to next that returns a value of 3 would instead return a value of undefined.

Fibonacci Generator

Earlier, we saw an example of generating numbers in the Fibonacci sequence using an iterable. We can produce the same sequence with less code by using a generator.

function* fibonacci() {
  let [prev, curr] = [0, 1];
  yield prev;
  yield curr;
  while (true) {
    [prev, curr] = [curr, prev + curr];
    yield curr;
  }
}
 
for (const n of fibonacci()) {
  if (n > 100) break;
  console.log(n);
}

This can also be implemented as an object that contains a generator method.

let fib = {
  * [Symbol.iterator]() {
    let [prev, curr] = [0, 1];
    yield prev;
    yield curr;
    while (true) {
      [prev, curr] = [curr, prev + curr];
      yield curr;
    }
  }
};
 
for (const n of fib) {
  if (n > 100) break;
  console.log(n);
}

This second approach, using an object with a generator method, is primarily useful for objects that will have multiple methods. Otherwise, the first approach, using a generator function, is preferred.

Generator Methods

Three methods on generators affect their state.

  • next

    This method gets the next value, similar to the iterator next method. It differs in that it takes an optional argument. That argument becomes the value of the yield expression at which the generator is currently paused, so an argument passed to the first call is ignored because no yield has been reached yet. This allows generators to act as data consumers; see the sketch after this list.

  • return

    This method takes a value and terminates the generator from the outside, just as if the generator returned the specified value.

  • throw

    This method takes an error description (typically an Error object) and terminates the generator from the outside, just as if the generator used the throw keyword. It throws the error inside the generator at the yield where execution was paused. If the generator catches the error and yields a value, the generator will not be terminated, otherwise it is terminated.
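
The following sketch, which is not from the original article, shows next being passed an argument and return terminating a generator from the outside:

// A generator acting as a data consumer: the value passed to next()
// becomes the result of the yield expression where execution was paused.
function* accumulator() {
  let total = 0;
  while (true) {
    const amount = yield total;
    total += amount;
  }
}

const acc = accumulator();
console.log(acc.next().value); // 0 - an argument to this first call would be ignored
console.log(acc.next(5).value); // 5
console.log(acc.next(3).value); // 8
console.log(acc.return(42)); // {value: 42, done: true} - terminated from the outside
console.log(acc.next(1)); // {value: undefined, done: true} - done remains true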

Array Methods

The Array class defines many methods that evaluate, find, filter, and transform contained elements. It would be useful if similar functions were available for any iterable sequence. ES 2015 does not provide these, and they will likely not be provided in ES 2016. Before showing how these can be implemented, here’s a review of the relevant Array methods.

  • includes - determines whether a collection contains a given value
  • indexOf - finds the index of the first occurrence of a given value
  • lastIndexOf - finds the index of the last occurrence of a given value
  • find - finds the first element that meets some condition
  • findIndex - finds the index of first element that meets some condition
  • every - determines whether every element meets a condition
  • some - determines whether some element meets a condition
  • filter - generates a new collection of elements that meet a condition
  • map - generates a new collection of elements that are the results of passing each element to a given function
  • forEach - passes each element to a given function one at a time
  • reduce - calculates the final result of applying a given function to the previous result and the next element

star-it Library

"star-it" is a library of functions that take an iterable 
and mimic the functionality of many Array methods. The name comes from "star", for the asterisk wildcard character, representing the many Array methods that are mimicked, and "it" for iterable. This library is available on GitHub at https://github.com/mvolkmann/star-it. It is also available in NPM under the name "star-it" and can be installed by running "npm install star-it".

To run the tests for this library,

  1. install Node.js
  2. clone the star-it Github repo
  3. cd to the star-it directory
  4. run npm install
  5. run npm test

Next, we will walk through code from the library. We provide some examples of working with iterables and generators and reinforce what we have covered thus far. Each function is accompanied by Jasmine test assertions that demonstrate how to use the function.

Note that only the filter, map, skip, and take methods make sense when working with infinite sequences (where the done property is never set to true).

The tests utilize an array (arr), three helper functions (add, isEven, and isOdd), and a class (TreeNode). Here is the code that implements these:

const arr = [1, 3, 5, 6, 7, 3, 1];
 
const add = (x, y) => x + y;
const isEven = x => x % 2 === 0;
const isOdd = x => x % 2 === 1;
 
class TreeNode {
  constructor(value) {
    this.value = value;
    this.children = [];
    this.depthFirst = true;
  }
 
  addChildren(...children) {
    this.children.push(...children);
  }
 
  // This traverses all descendants of this TreeNode,
  // depth-first if this.depthFirst = true (the default)
  // or breadth-first otherwise.
  * [Symbol.iterator]() {
    if (this.depthFirst) {
      for (const child of this.children) {
        yield child;
        yield* child; // This yields all of the child's descendants.
      }
    } else { // breadth-first
      let queue = this.children, newQueue;
      while (queue.length) {
        // Yield all nodes at current level.
        yield* queue;
        // Get all children one level down.
        newQueue = [];
        for (const child of queue) {
          newQueue.push(...child.children);
        }
        queue = newQueue;
      }
    }
  }
}

The functions in the star-it library do some verification of the types of arguments passed to them. There are verifications that an object is a function, iterator, or iterable. You may wish to study these functions to confirm your understanding of the requirements for implementing iterators and iterables.

function assertIsFunction(value) {
  if (typeof value !== 'function') {
    throw new Error(`expected a function, but got ${value}`);
  }
}

function assertIsIterator(value) {
  const nextFn = value.next;
  if (!nextFn || typeof nextFn !== 'function') {
    throw new Error(`expected an iterator, but got ${value}`);
  }
}

function assertIsIterable(value) {
  const iteratorFn = value[Symbol.iterator];
  if (!iteratorFn || typeof iteratorFn !== 'function') {
    throw new Error(`expected an iterable, but got ${value}`);
  }

  // Obtain an iterator from the iterable.
  const iterator = iteratorFn.apply(value);
  assertIsIterator(iterator);
}

And now, the functions in the star-it library! A common feature of these functions is that each is fairly short and relatively easy to understand. Understanding them will take you a long way toward being able to use and implement iterables, iterators, and generator functions. The code that follows each function definition is a test snippet that demonstrates its use.

every

function every(obj, predicate) {
  assertIsIterable(obj);
  assertIsFunction(predicate);
  for (const element of obj) {
    if (!predicate(element)) return false;
  }
  return true;
}
expect(starIt.every(arr, isOdd)).toBeFalsy();

filter

function* filter(obj, predicate) {
  assertIsIterable(obj);
  assertIsFunction(predicate);
  for (const element of obj) {
    if (predicate(element)) yield element;
  }
}
let iterable = starIt.filter(arr, isOdd);
let result = [...iterable];
expect(result).toEqual([1, 3, 5, 7, 3, 1]);

find

function find(obj, predicate) {
  assertIsIterable(obj);
  assertIsFunction(predicate);
  for (const element of obj) {
    if (predicate(element)) return element;
  }
  return undefined;
}
expect(starIt.find(arr, isEven)).toBe(6);

findIndex

function findIndex(obj, predicate) {
  assertIsIterable(obj);
  assertIsFunction(predicate);
  let index = 0;
  for (const element of obj) {
    if (predicate(element)) return index;
    index++;
  }
  return -1;
}
expect(starIt.findIndex(arr, isEven)).toBe(3);

forEach

function forEach(obj, fn) {
  assertIsIterable(obj);
  assertIsFunction(fn);
  for (const element of obj) {
    fn(element);
  }
}
const visited = [];
starIt.forEach(arr, v => visited.push(v));
expect(visited).toEqual(arr);

includes

function includes(obj, value) {
  assertIsIterable(obj);
  for (const element of obj) {
    if (element === value) return true;
  }
  return false;
}
expect(starIt.includes(arr, 5)).toBeTruthy();
expect(starIt.includes(arr, 4)).toBeFalsy();

indexOf

function indexOf(obj, value) {
  assertIsIterable(obj);
  let index = 0;
  for (const element of obj) {
    if (element === value) return index;
    index++;
  }
  return -1;
}
expect(starIt.indexOf(arr, 3)).toBe(1);
expect(starIt.indexOf(arr, 4)).toBe(-1);

lastIndexOf

function lastIndexOf(obj, value) {
  assertIsIterable(obj);
  let index = 0, lastIndex = -1;
  for (const element of obj) {
    if (element === value) lastIndex = index;
    index++;
  }
  return lastIndex;
}
expect(starIt.lastIndexOf(arr, 3)).toBe(5);
expect(starIt.lastIndexOf(arr, 4)).toBe(-1);

map

function* map(obj, fn) {
  assertIsIterable(obj);
  assertIsFunction(fn);
  for (const element of obj) {
    yield fn(element);
  }
}
let iterable = starIt.map(arr, isOdd);
let result = [...iterable];
expect(result).toEqual([
  true, true, true, false,
  true, true, true
]);
iterable = starIt.map([], isOdd);
result = [...iterable];
expect(result).toEqual([]);

reduce

function reduce(obj, fn, initial) {
  assertIsIterable(obj);
  assertIsFunction(fn);
  const it = obj[Symbol.iterator]();
 
  let done = false, value;
  if (initial === undefined) {
    ({value, done} = it.next());
  } else {
    value = initial;
  }
 
  let result = value;
  while (!done) {
    ({value, done} = it.next());
    if (!done) result = fn(result, value);
  }
 
  return result;
}
expect(starIt.reduce(arr, add)).toBe(26);
expect(starIt.reduce([19], add)).toBe(19);
expect(starIt.reduce([], add, 0)).toBe(0);

some

function some(obj, predicate) {
  assertIsIterable(obj);
  assertIsFunction(predicate);
  for (const element of obj) {
    if (predicate(element)) return true;
  }
  return false;
}
expect(starIt.some(arr, isOdd)).toBeTruthy();

Here are some bonus functions that are not in the Array class but are useful when working with iterables.

skip

// This skips the first n values of an iterable
// and yields the rest.
function* skip(obj, n) {
  assertIsIterable(obj);
  const iterator = obj[Symbol.iterator]();
  let result;
 
  // Skip the first n values.
  for (let i = 0; i <= n; i++) {
    result = iterator.next();
    if (result.done) return;
  }
 
  // Yield the rest of the values.
  while (!result.done) {
    yield result.value;
    result = iterator.next();
  }
}
const gen = starIt.skip(arr, 2);
expect(gen.next().value).toBe(5);
expect(gen.next().value).toBe(6);

take

// Yields only the first n values of an iterable.
function* take(obj, n) {
  assertIsIterable(obj);
  const iterator = obj[Symbol.iterator]();
  while (n > 0) {
    yield iterator.next().value;
    n--;
  }
}
const gen = starIt.take(arr, 2);
expect(gen.next().value).toBe(1);
expect(gen.next().value).toBe(3);
expect(gen.next().value).toBe(undefined);
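
Because take stops after n values, it also works with infinite sequences such as the fibonacci generator function shown earlier. A small usage sketch:

const firstFive = [...starIt.take(fibonacci(), 5)];
console.log(firstFive); // [0, 1, 1, 2, 3]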

Summary

JavaScript iterators are cool! JavaScript generators are even cooler! Understanding these is important to fully utilize for-of loops, the spread operator, and destructuring. As seen in the TreeNode example class, it is sometimes useful to write classes in such a way that objects created from them are iterable.

From: https://objectcomputing.com/resources/publications/sett/javascript-iterators-and-generators/

The Generalized Specialist: How Shakespeare, Da Vinci, and Kepler Excelled

fs.blog · Tuesday 14 November 2017 · ¯\_(ツ)_/¯ · 13 minute read
“What do you want to be when you grow up?” Do you ever ask kids this question? Did adults ask you this when you were a kid?

“What do you want to be when you grow up?” Do you ever ask kids this question? Did adults ask you this when you were a kid?

Even if you managed to escape this question until high school, by the time you got there you were probably expected to be able to answer it, if only to be able to choose a college and a major. Maybe you took aptitude tests, along with the standard academic tests, in high school. This is when the pressure to go down a path to a job commences. Increasingly, the education system seems to want to reduce the time it takes for us to become productive members of the workforce, so instead of exploring more options, we are encouraged to start narrowing them.

Any field you go into, from finance to engineering, requires some degree of specialization. Once you land a job, the process of specialization only amplifies. You become a specialist in certain aspects of the organization you work for.

Then something happens. Maybe your specialty is no longer needed or gets replaced by technology. Or perhaps you get promoted. As you go up the ranks of the organization, your specialty becomes less and less important, and yet the tendency is to hold on to it longer and longer. If it’s the only subject or skill you know better than anything else, you tend to see it everywhere. Even where it doesn’t exist.

Every problem is a nail and you just happen to have a hammer.

Only this approach doesn’t work. Because you don’t know the big ideas from other disciplines, you start making decisions that don’t take into account how the world really works. These decisions ripple outward, and you have to spend time correcting your mistakes. If you’re not careful about self-reflection, you won’t learn, and you’ll make some version of the same mistakes over and over.

Should we become specialists or polymaths? Is there a balance we should pursue?

There is no single answer.

The decision is personal. And most of the time we fail to see the life-changing implications of it. Whether we’re conscious of this or not, it’s also a decision we have to make and re-make over and over again. Every day, we have to decide where to invest our time — do we become better at what we do or learn something new?

There is another way to think about this question, though.

Around 2700 years ago, the Greek poet Archilochus wrote: “the fox knows many things; the hedgehog one big thing.” In the 1950s, philosopher Isaiah Berlin used that sentence as the basis of his essay “The Hedgehog and the Fox.” In it, Berlin divides great thinkers into two categories: hedgehogs, who have one perspective on the world, and foxes, who have many different viewpoints. Although Berlin later claimed the essay was not intended to be serious, it has become a foundational part of thinking about the distinction between specialists and generalists.

Berlin wrote that “…there exists a great chasm between those, on one side, who relate everything to a single central vision, one system … in terms of which they understand, think and feel … and, on the other hand, those who pursue many ends, often unrelated and even contradictory, connected, if at all, only in some de facto way.”

A generalist is a person who is a competent jack of all trades, with lots of divergent useful skills and capabilities. This is the handyman who can fix your boiler, unblock the drains, replace a door hinge, or paint a room. The general practitioner doctor whom you see for any minor health problem (and who refers you to a specialist for anything major). The psychologist who works with the media, publishes research papers, and teaches about a broad topic.

A specialist is someone with distinct knowledge and skills related to a single area. This is the cardiologist who spends their career treating and understanding heart conditions. The scientist who publishes and teaches about a specific protein for decades. The developer who works with a particular program.

In his original essay, Berlin writes that generalists “lead lives, perform acts and entertain ideas that are centrifugal rather than centripetal; their thought is scattered or diffused, moving on many levels, seizing upon the essence of a vast variety of experiences and objects … seeking to fit them into, or exclude them from, any one unchanging, all embracing … unitary inner vision.”

The generalist and the specialist are on the same continuum; there are degrees of specialization in a subject. There’s a difference between someone who specializes in teaching history and someone who specializes in teaching the history of the American Civil war, for example. Likewise, there is a spectrum for how generalized or specialized a certain skill is.

Some skills — like the ability to focus, to read critically, or to make rational decisions — are of universal value. Others are a little more specialized but can be used in many different careers. Examples of these skills would be design, project management, and fluency in a foreign language.

The distinction between generalization and specialization comes from biology. Species are referred to as either generalists or specialists, as with the hedgehog and the fox.

A generalist species can live in a range of environments, utilizing whatever resources are available. Often, these critters eat an omnivorous diet. Raccoons, mice, and cockroaches are generalists. They live all over the world and can eat almost anything. If a city is built in their habitat, then no problem; they can adapt.

A specialist species needs particular conditions to survive. In some cases, they are able to live only in a discrete area or eat a single food. Pandas are specialists, needing a diet of bamboo to survive. Specialist species can thrive if the conditions are correct. Otherwise, they are vulnerable to extinction.

The distinction between generalist and specialist species is useful as a point of comparison. Generalist animals (including humans) can be less efficient, yet they are less fragile amidst change. If you can’t adapt, changes become threats instead of opportunities.

While it’s not very glamorous to take career advice from a raccoon or a panda, we can learn something from them about the dilemmas we face. Do we want to be like a raccoon, able to survive anywhere, although never maximizing our potential in a single area? Or like a panda, unstoppable in the right context, but struggling in an inappropriate one?

Costs and Benefits

Generalists have the advantage of interdisciplinary knowledge, which fosters creativity and a firmer understanding of how the world works. They have a better overall perspective and can generally perform second-order thinking in a wider range of situations than the specialist can.

Generalists often possess transferable skills, allowing them to be flexible with their career choices and adapt to a changing world. They can do a different type of work and adapt to changes in the workplace. Gatekeepers tend to cause fewer problems for generalists than for specialists.

Managers and leaders are often generalists because they need a comprehensive perspective of their entire organization. And an increasing number of companies are choosing to have a core group of generalists on staff, and hire freelance specialists only when necessary.

The métiers at the lowest risk of automation in the future tend to be those which require a diverse, nuanced skill set, such as the work of construction vehicle operators, blue-collar workers, therapists, dentists, and teachers.

When their particular skills are in demand, specialists experience substantial upsides. The scarcity of their expertise means higher salaries, less competition, and more leverage. Nurses, doctors, programmers, and electricians are currently in high demand where I live, for instance.

Specialists get to be passionate about what they do — not in the usual “follow your passion!” way, but in the sense that they can go deep and derive the satisfaction that comes from expertise. Garrett Hardin offers his perspective on the value of specialists: 

…we cannot do without experts. We accept this fact of life, but not without anxiety. There is much truth in the definition of the specialist as someone who “knows more and more about less and less.” But there is another side to the coin of expertise. A really great idea in science often has its birth as apparently no more than a particular answer to a narrow question; it is only later that it turns out that the ramifications of the answer reach out into the most surprising corners. What begins as knowledge about very little turns out to be wisdom about a great deal.

Hardin cites the development of probability theory as an example. When Blaise Pascal and Pierre de Fermat sought to devise a means of dividing the stakes in an interrupted gambling game, their expertise created a theory with universal value.

The same goes for many mental models and unifying theories. Specialists come up with them, and generalists make use of them in surprising ways.

The downside is that specialists are vulnerable to change. Many specialist jobs are disappearing as technology changes. Stockbrokers, for example, face the possibility of replacement by AI in coming years. That doesn’t mean no one will hold those jobs, but demand will decrease. Many people will need to learn new work skills, and starting over in a new field will put them back decades. That’s a serious knock, both psychologically and financially.

Specialists are also subject to “man with a hammer” syndrome. Their area of expertise can become the lens they see everything through.

As Michael Mauboussin writes in Think Twice:

…people stuck in old habits of thinking are failing to use new means to gain insight into the problems they face. Knowing when to look beyond experts requires a totally fresh point of view and one that does not come naturally. To be sure, the future for experts is not all bleak. Experts retain an advantage in some crucial areas. The challenge is to know when and how to use them.

Understanding and staying within their circle of competence is even more important for specialists. A specialist who is outside of their circle of competence and doesn’t know it is incredibly dangerous.

Philip Tetlock performed an 18-year study to look at the quality of expert predictions. Could people who are considered specialists in a particular area forecast the future with greater accuracy than a generalist? Tetlock tracked 284 experts from a range of disciplines, recording the outcomes of 28,000 predictions.

The results were stark: predictions coming from generalist thinkers were more accurate. Experts who stuck to their specialized areas and ignored interdisciplinary knowledge fared worse. The specialists tended to be more confident in their erroneous predictions than the generalists. The specialists made definite assertions — which we know from probability theory to be a bad idea. It seems that generalists have an edge when it comes to Bayesian updating, recognizing probability distributions, and long-termism.

Organizations, industries, and the economy need both generalists and specialists. And when we lack the right balance, it creates problems. Millions of jobs remain unfilled, while millions of people lack employment. Many of the empty positions require specialized skills. Many of the unemployed have skills which are too general to fill those roles. We need a middle ground.

The Generalized Specialist

The economist, philosopher, and writer Henry Hazlitt sums up the dilemma:

In the modern world knowledge has been growing so fast and so enormously, in almost every field, that the probabilities are immensely against anybody, no matter how innately clever, being able to make a contribution in any one field unless he devotes all his time to it for years. If he tries to be the Rounded Universal Man, like Leonardo da Vinci, or to take all knowledge for his province, like Francis Bacon, he is most likely to become a mere dilettante and dabbler. But if he becomes too specialized, he is apt to become narrow and lopsided, ignorant on every subject but his own, and perhaps dull and sterile even on that because he lacks perspective and vision and has missed the cross-fertilization of ideas that can come from knowing something of other subjects.

What’s the safest option, the middle ground?

By many accounts, it’s being a specialist in one area, while retaining a few general iterative skills. That might sound like it goes against the idea of specialists and generalists being mutually exclusive, but it doesn’t.

A generalizing specialist has a core competency which they know a lot about. At the same time, they are always learning and have a working knowledge of other areas. While a generalist has roughly the same knowledge of multiple areas, a generalizing specialist has one deep area of expertise and a few shallow ones. We have the option of developing a core competency while building a base of interdisciplinary knowledge.

“The fox knows many things, but the hedgehog knows one big thing.”

— Archilochus

As Tetlock’s research shows, for us to understand how the world works, it’s not enough to home in on one tiny area for decades. We need to pull ideas from everywhere, remaining open to having our minds changed, always looking for disconfirming evidence. Joseph Tussman put it this way: “If we do not let the world teach us, it teaches us a lesson.”

Many great thinkers are (or were) generalizing specialists.

Shakespeare specialized in writing plays, but his experiences as an actor, poet, and part owner of a theater company informed what he wrote. So did his knowledge of Latin, agriculture, and politics. Indeed, the earliest known reference to his work comes from a critic who accused him of being “an absolute Johannes factotum” (jack of all trades).

Leonardo Da Vinci was a famous generalizing specialist. As well as the art he is best known for, Da Vinci dabbled in engineering, music, literature, mathematics, botany, and history. These areas informed his art — note, for example, the rigorous application of botany and mathematics in his paintings. Some scholars consider Da Vinci to be the first person to combine interdisciplinary knowledge in this way or to recognize that a person can branch out beyond their defining trade.

Johannes Kepler revolutionized our knowledge of planetary motion by combining physics and optics with his main focus, astronomy. Military strategist John Boyd designed aircraft and developed new tactics, using insights from divergent areas he studied, including thermodynamics and psychology. He could think in a different manner from his peers, who remained immersed in military knowledge for their entire careers.

Shakespeare, Da Vinci, Kepler, and Boyd excelled by branching out from their core competencies. These men knew how to learn fast, picking up the key ideas and then returning to their specialties. Unlike their forgotten peers, they didn’t continue studying one area past the point of diminishing returns; they got back to work — and the results were extraordinary.

Many people seem to do work which is unrelated to their area of study or their prior roles. But dig a little deeper and it’s often the case that knowledge from the past informs their present. Marcel Proust put it best: “the real act of discovery consists not in finding new lands, but in seeing with new eyes.”

Interdisciplinary knowledge is what allows us to see with new eyes.

When Charlie Munger was asked whether to become a polymath or a specialist at the 2017 shareholders meeting for the Daily Journal, his answer surprised a lot of people. Many expected the answer to be obvious. Of course, he would recommend that people become generalists. Only this is not what he said.

Munger remarked:

I don’t think operating over many disciplines, as I do, is a good idea for most people. I think it’s fun, that’s why I’ve done it. And I’m better at it than most people would be, and I don’t think I’m good at being the very best at handling differential equations. So, it’s been a wonderful path for me, but I think the correct path for everybody else is to specialize and get very good at something that society rewards, and then to get very efficient at doing it. But even if you do that, I think you should spend 10 to 20% of your time [on] trying to know all the big ideas in all the other disciplines. Otherwise … you’re like a one-legged man in an ass-kicking contest. It’s not going to work very well. You have to know the big ideas in all the disciplines to be safe if you have a life lived outside a cave. But no, I think you don’t want to neglect your business as a dentist to think great thoughts about Proust.

In his comments, we can find the underlying approach most likely to yield exponential results: Specialize most of the time, but spend time understanding the broader ideas of the world.

This approach isn’t what most organizations and educational institutions provide. Branching out isn’t in many job descriptions or in many curricula. It’s a project we have to undertake ourselves, by reading a wide range of books, experimenting with different areas, and drawing ideas from each one.

Still curious? Check out the biographies of Leonardo da Vinci and Ben Franklin.

From: https://fs.blog/2017/11/generalized-specialist/

Getting to Know TensorFlow

hackernoon.comTuesday 08 November 2016Nishant Shukla12 minute read
This article was excerpted from Machine Learning with TensorFlow. Before jumping into machine learning algorithms, you should first familiarize yourself with how to use the tools.

This article was excerpted from Machine Learning with TensorFlow.

Before jumping into machine learning algorithms, you should first familiarize yourself with how to use the tools. This article covers some essential advantages of TensorFlow, to convince you it’s the machine learning library of choice.

As a thought experiment, let’s imagine what happens when we write Python code without a handy computing library. It’ll be like using a new smartphone without installing any extra apps. The phone still works, but you’d be more productive if you had the right apps.

Consider the following… You’re a business owner tracking the flow of sales. You want to calculate your revenue from selling your products. Your inventory consists of 100 different products, and you represent each price in a vector called prices. Another vector of size 100 called amounts represents the inventory count of each item. You can write the following chunk of Python code shown in listing 1 to calculate the revenue of selling all products. Keep in mind that this code doesn’t import any libraries.
Listing 1. Computing the inner product of two vectors without using any library
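
The code for this listing isn't reproduced in this excerpt. The sketch below is an approximation of what it likely contained, with small three-element lists standing in for the 100-element prices and amounts vectors described above.

prices = [10.0, 20.0, 30.0]   # stand-in for the vector of 100 prices
amounts = [5, 2, 1]           # stand-in for the vector of 100 inventory counts

revenue = 0
for price, amount in zip(prices, amounts):   # pair each price with its inventory count
    revenue += price * amount                # accumulate price * quantity
print(revenue)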

That’s a lot of code just to calculate the inner-product of two vectors (also known as dot product). Imagine how much code would be required for something more complicated, such as solving linear equations or computing the distance between two vectors.

By installing the TensorFlow library, you also install a well-known and robust Python library called NumPy, which facilitates mathematical manipulation in Python. Using Python without its libraries (e.g. NumPy and TensorFlow) is like using a camera without autofocus: you gain more flexibility, but you can easily make careless mistakes. It’s already pretty easy to make mistakes in machine learning, so let’s keep our camera on auto-focus and use TensorFlow to help automate some tedious software development.

Listing 2 shows how to concisely write the same inner-product using NumPy.

Listing 2. Computing the inner product using NumPy
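
The original code isn't included in this excerpt either; a minimal sketch of the NumPy version, assuming the same prices and amounts vectors as before:

import numpy as np

prices = np.array([10.0, 20.0, 30.0])
amounts = np.array([5, 2, 1])
revenue = np.dot(prices, amounts)   # the inner product in a single call
print(revenue)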

Python is a succinct language. Fortunately for you, that means you won’t see pages and pages of cryptic code. On the other hand, the brevity of the Python language implies that a lot is happening behind each line of code, which you should familiarize yourself with carefully as you work.

By the way… Detailed documentation about the various functions in the Python and C++ APIs for TensorFlow is available at https://www.tensorflow.org/api_docs/index.html.

This article is geared toward using TensorFlow for computations, because machine learning relies on mathematical formulations. After going through the examples and code listings, you’ll be able to use TensorFlow for some arbitrary tasks, such as computing statistics on big data. The focus here will entirely be about how to use TensorFlow, as opposed to machine learning in general.

Machine learning algorithms require a large amount of mathematical operations. Often, an algorithm boils down to a composition of simple functions iterated until convergence. Sure, you might use any standard programming language to perform these computations, but the secret to both manageable and performant code is the use of a well-written library.

That sounds like a gentle start, right? Without further ado, let’s write our first TensorFlow code!

Ensuring TensorFlow works

First, we need to ensure that everything is working correctly. Check the oil level in your car, repair the blown fuse in your basement, and ensure that your credit balance is zero.

Just kidding, I’m talking about TensorFlow.

Go ahead and create a new file called test.py for our first piece of code. Import TensorFlow by running the following script:

import tensorflow as tf

This single import prepares TensorFlow for your bidding. If the Python interpreter doesn’t complain, then we’re ready to start using TensorFlow!

Sticking with TensorFlow conventions

The TensorFlow library is usually imported with the tf qualified name. Generally, qualifying TensorFlow with tf is a good idea to remain consistent with other developers and open-source TensorFlow projects. You may choose not to qualify it or change the qualification name, but then successfully reusing other people’s snippets of TensorFlow code in your own projects will be an involved process.

Representing tensors

Now that we know how to import TensorFlow into a Python source file, let’s start using it! A convenient way to describe an object in the real world is by listing out its properties, or features. For example, you can describe a car by its color, model, engine type, and mileage. An ordered list of some features is called a feature vector, and that’s exactly what we’ll represent in TensorFlow code.

Feature vectors are one of the most useful devices in machine learning because of their simplicity (they’re lists of numbers). Each data item typically consists of a feature vector, and a good dataset has thousands, if not more, of these feature vectors. No doubt, you’ll often be dealing with more than one vector at a time. A matrix concisely represents a list of vectors, where each column of a matrix is a feature vector.

The syntax to represent matrices in TensorFlow is a vector of vectors, each of the same length. Figure 1 is an example of a matrix with two rows and three columns, such as [[1, 2, 3], [4, 5, 6]]. Notice, this is a vector containing two elements, and each element corresponds to a row of the matrix.

Figure 1. The matrix in the lower half of the diagram is a visualization from its compact code notation in the upper half of the diagram. This form of notation is a common paradigm in most scientific computing libraries.

We access an element in a matrix by specifying its row and column indices. For example, the first row and first column indicate the first top-left element. Sometimes it’s convenient to use more than two indices, such as when referencing a pixel in a color image not only by its row and column, but also its red/green/blue channel. A tensor is a generalization of a matrix that specifies an element by an arbitrary number of indices.

Example of a tensor… Suppose an elementary school enforces assigned seating to its students. You’re the principal, and you’re terrible with names. Luckily, each classroom has a grid of seats, where you can easily nickname a student by his or her row and column index.
There are multiple classrooms, so you can’t say “Good morning 4,10! Keep up the good work.” You need to also specify the classroom, “Hi 4,10 from classroom 2.” Unlike a matrix, which needs only two indices to specify an element, the students in this school need three numbers. They’re all a part of a rank three tensor!

The syntax for tensors is even more nested vectors. For example, a 2-by-3-by-2 tensor is [[[1,2], [3,4], [5,6]], [[7,8], [9,10], [11,12]]], which can be thought of as two matrices, each of size 3-by-2. Consequently, we say this tensor has a rank of 3. In general, the rank of a tensor is the number of indices required to specify an element. Machine learning algorithms in TensorFlow act on Tensors, and it’s important to understand how to use them.

Figure 2. This tensor can be thought of as multiple matrices stacked on top of each other. To specify an element, you must indicate the row and column, as well as which matrix is being accessed. Therefore, the rank of this tensor is three.

It’s easy to get lost in the many ways to represent a tensor. Intuitively, each of the following three lines of code in Listing 3 is trying to represent the same 2-by-2 matrix. This matrix represents two feature vectors of two dimensions each. It could, for example, represent two people’s ratings of two movies. Each person, indexed by the row of the matrix, assigns a number to describe his or her review of the movie, indexed by the column. Run the code to see how to generate a matrix in TensorFlow.

Listing 3. Different ways to represent tensors
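
The listing itself is missing from this excerpt; the sketch below is a plausible reconstruction based on the description that follows (the specific matrix values are illustrative assumptions):

import numpy as np
import tensorflow as tf

m1 = [[1.0, 2.0], [3.0, 4.0]]                              # a plain Python list of lists
m2 = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)  # a NumPy ndarray
m3 = tf.constant([[1.0, 2.0], [3.0, 4.0]])                 # a TensorFlow Tensor object

# Convert each representation to a Tensor and print its type
t1 = tf.convert_to_tensor(m1, dtype=tf.float32)
t2 = tf.convert_to_tensor(m2, dtype=tf.float32)
t3 = tf.convert_to_tensor(m3, dtype=tf.float32)

print(type(t1))
print(type(t2))
print(type(t3))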

The first variable (m1) is a list, the second variable (m2) is an ndarray from the NumPy library, and the last variable (m3) is TensorFlow’s Tensor object. All operators in TensorFlow, such as neg, are designed to operate on tensor objects. A convenient function we can sprinkle anywhere to make sure that we’re dealing with tensors, as opposed to the other types, is tf.convert_to_tensor( … ). Most functions in the TensorFlow library already perform this conversion (redundantly), even if you forget to. Using tf.convert_to_tensor( … ) is optional, but I show it here because it helps demystify the implicit type system being handled across the library. The aforementioned listing 3 produces the following output three times:

<class 'tensorflow.python.framework.ops.Tensor'>

Let’s take another look at defining tensors in code. After importing the TensorFlow library, we can use the constant operator as follows in Listing 4.

Listing 4. Creating tensors
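
The code isn't shown in this excerpt; here is a sketch consistent with the output below (the exact values are assumptions, but the shapes and data types match):

import tensorflow as tf

matrix1 = tf.constant([[1., 2.]])                     # a 1x2 matrix of floats
matrix2 = tf.constant([[1], [2]])                     # a 2x1 matrix of integers
tensor3 = tf.constant([[[1, 2], [3, 4], [5, 6]],
                       [[7, 8], [9, 10], [11, 12]]])  # a 2x3x2 rank-three tensor

print(matrix1)
print(matrix2)
print(tensor3)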

Running listing 4 produces the following output:

Tensor("Const:0", shape=TensorShape([Dimension(1), Dimension(2)]), dtype=float32)
Tensor("Const_1:0", shape=TensorShape([Dimension(2), Dimension(1)]), dtype=int32)
Tensor("Const_2:0", shape=TensorShape([Dimension(2), Dimension(3), Dimension(2)]), dtype=int32)

As you can see from the output, each tensor is represented by the aptly named Tensor object. Each Tensor object has a unique label (name), a dimension (shape) to define its structure, and data type (dtype) to specify the kind of values we’ll manipulate. Because we didn’t explicitly provide a name, the library automatically generated them: “Const:0”, “Const_1:0”, and “Const_2:0”.

Tensor types

Notice that each of the elements of matrix1 ends with a decimal point. The decimal point tells Python that the data type of the elements isn’t an integer, but instead a float. We can pass in explicit dtype values. Much like NumPy arrays, tensors take on a data type that specifies the kind of values we’ll manipulate in that tensor.

TensorFlow also comes with a few convenient constructors for some simple tensors. For example, tf.zeros(shape) creates a tensor of a specific shape with all values initialized to zero. Similarly, tf.ones(shape) creates a tensor of a specific shape with all values initialized to one. The shape argument is a one-dimensional (1D) tensor of type int32 (a list of integers) describing the dimensions of the tensor.
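
For instance (an illustrative snippet, not from the original article):

zeros_matrix = tf.zeros([2, 3])   # a 2x3 tensor filled with 0.0
ones_vector = tf.ones([500])      # a 500-element tensor filled with 1.0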

Creating operators

Now that we have a few starting tensors ready to use, we can apply more interesting operators, such as addition or multiplication. Consider each row of a matrix representing the transaction of money to (positive value) and from (negative value) another person. Negating the matrix is a way to represent the transaction history of the other person’s flow of money. Let’s start simple and run the negation op (short for operation) on our matrix1 tensor from listing 4. Negating a matrix turns the positive numbers into negative numbers of the same magnitude, and vice versa.

Negation is one of the simplest operations. As shown in listing 5, negation takes only one tensor as input, and produces a tensor with every element negated — now, try running the code yourself. If you master how to define negation, it’ll provide a stepping stone to generalize that skill to all other TensorFlow operations.

Aside… Defining an operation, such as negation, is different from running it.
Listing 5. Using the negation operator
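
The listing's code is missing from this excerpt; a minimal sketch consistent with the printed output and the [[-1, -2]] result mentioned later (the exact input matrix is an assumption). Note that tf.neg is the pre-1.0 name used throughout this article; in TensorFlow 1.0 and later it is tf.negative.

import tensorflow as tf

x = tf.constant([[1, 2]])   # assumed input: a 1x2 integer matrix
neg_x = tf.neg(x)           # defines the negation op; nothing is computed yet
print(neg_x)                # prints the Tensor object, not its values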

Listing 5 generates the following output:

Tensor(“Neg:0”, shape=(1, 2), dtype=int32)

Useful TensorFlow operators

The official documentation carefully lays out all available math ops: https://www.tensorflow.org/api_docs/Python/math_ops.html.

Some specific examples of commonly used operators include:

  • tf.add(x, y): Add two tensors of the same type, x + y
  • tf.sub(x, y): Subtract tensors of the same type, x - y
  • tf.mul(x, y): Multiply two tensors element-wise
  • tf.pow(x, y): Take the element-wise power of x to y
  • tf.exp(x): Equivalent to pow(e, x), where e is Euler’s number (2.718…)
  • tf.sqrt(x): Equivalent to pow(x, 0.5)
  • tf.div(x, y): Take the element-wise division of x and y
  • tf.truediv(x, y): Same as tf.div, except casts the arguments as floats
  • tf.floordiv(x, y): Same as truediv, except rounds down the final answer to an integer
  • tf.mod(x, y): Takes the element-wise remainder from division
Exercise… Use the TensorFlow operators we’ve learned to produce the Gaussian Distribution (also known as Normal Distribution). See Figure 3 for a hint. For reference, you can find the probability density of the normal distribution online: https://en.wikipedia.org/wiki/Normal_distribution.

Most mathematical expressions such as “*”, “-“, “+”, etc. are shortcuts for their TensorFlow equivalent, for the sake of brevity. The Gaussian function includes many operations, and it’s cleaner to use some short-hand notations as follows:

from math import pi
import tensorflow as tf

x = tf.linspace(-3.0, 3.0, 100)  # assumed input: the points at which to evaluate the density
mean = 0.0                       # center of the distribution
sigma = 1.0                      # standard deviation; must be nonzero to avoid dividing by zero
(tf.exp(tf.neg(tf.pow(x - mean, 2.0) /
        (2.0 * tf.pow(sigma, 2.0)))) *
 (1.0 / (sigma * tf.sqrt(2.0 * pi))))
Figure 3. The graph represents the operations needed to produce a Gaussian distribution. The links between the nodes represent how data flows from one operation to the next. The operations themselves are simple, but complexity arises in how they intertwine.

As you can see, TensorFlow algorithms are easy to visualize. They can be described by flowcharts. The technical (and more correct) term for the flowchart is a graph. Every arrow in a flowchart is called the edge of the graph. In addition, every state of the flowchart is called a node.

Executing operators with sessions

A session is an environment of a software system that describes how the lines of code should run. In TensorFlow, a session sets up how the hardware devices (such as CPU and GPU) talk to each other. That way, you can design your machine learning algorithm without worrying about micro-managing the hardware that it runs on. Of course, you can later configure the session to change its behavior without changing a line of the machine learning code.

To execute an operation and retrieve its calculated value, TensorFlow requires a session. Only a registered session may fill the values of a Tensor object. To do so, you must create a session class using tf.Session() and tell it to run an operator (listing 6). The result will be a value you can later use for further computations.

Listing 6. Using a session
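
The code isn't reproduced in this excerpt; a minimal sketch, reusing the negation op from listing 5 (the input matrix is the same assumed value):

import tensorflow as tf

x = tf.constant([[1, 2]])
neg_x = tf.neg(x)               # tf.negative in TensorFlow 1.0+

with tf.Session() as sess:      # create a session
    result = sess.run(neg_x)    # run the op and fetch its computed value
print(result)                   # [[-1 -2]]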

Congratulations! You have just written your first full TensorFlow code. Although all it does is negate a matrix to produce [[-1, -2]], the core overhead and framework are just the same as everything else in TensorFlow.

Session configurations

You can also pass options to tf.Session. For example, TensorFlow automatically determines the best way to assign a GPU or CPU device to an operation, depending on what is available. We can pass an additional option, log_device_placement=True, when creating a Session, as shown in listing 7.

Listing 7. Logging a session
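
Again, the code is missing from this excerpt; a sketch of how the option is likely passed (using the same assumed negation op as before):

import tensorflow as tf

x = tf.constant([[1, 2]])
neg_x = tf.neg(x)

# Ask the session to log which device each operation runs on
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    result = sess.run(neg_x)
print(result)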

This outputs info about which CPU/GPU devices are used in the session for each operation. For example, running listing 7 results in traces of output like the following, showing which device was used to run the negation op:

Neg: /job:localhost/replica:0/task:0/cpu:0

Sessions are essential in TensorFlow code. You need to call a session to actually “run” the math. Figure 4 maps out how the different components of TensorFlow interact with the machine learning pipeline. A session not only runs a graph operation, but can also take placeholders, variables, and constants as input. We’ve used constants so far, but in later sections we’ll start using variables and placeholders. Here’s a quick overview of these three types of values, with a short sketch after figure 4.

  • Placeholder: A value that is unassigned, but will be initialized by the session wherever it is run.
  • Variable: A value that can change, such as a parameter of a machine learning model.
  • Constant: A value that does not change, such as hyper-parameters or settings.
Figure 4. The session dictates how the hardware will be used to most efficiently process the graph. When the session starts, it assigns the CPU and GPU devices to each of the nodes. After processing, the session outputs data in a usable format, such as a NumPy array. A session optionally may be fed placeholders, variables, and constants.
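
As a rough illustration of the three value types (this snippet is not from the original article; the names and values are assumptions):

import tensorflow as tf

ph = tf.placeholder(tf.float32)         # placeholder: supplied when the session runs
var = tf.Variable(0.0, name="counter")  # variable: a value the graph can change
const = tf.constant(3.0)                # constant: a fixed value

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())              # variables must be initialized first
    total = sess.run(ph + var + const, feed_dict={ph: 1.0})  # feed the placeholder's value
print(total)   # 4.0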

Hungry for more?

That’s it for now, I hope that you have successfully acquainted yourself with some of the basic workings of TensorFlow. If this article has left you ravenous for more delicious TensorFlow tidbits, please go download the first chapter of Machine Learning with TensorFlow and see this Slideshare presentation for more information (and a discount code).

From: https://hackernoon.com/machine-learning-with-tensorflow-8873fdee2b68

Tropical Treats Tasting Time Part One: February in Florida

cookingissues.comMonday 08 August 2011Dave Arnold10 minute read
If you hate things that are awesome stop reading now. If you are still reading: do whatever it takes to get to the Fruit and Spice Park in Homestead Florida, an hour south of Miami.

If you hate things that are awesome stop reading now.

If you are still reading: do whatever it takes to get to the Fruit and Spice Park in Homestead Florida, an hour south of Miami. South-Dade is the Mecca for tropical fruits in the continental US, and the Fruit and Spice Park is the public park where you can sample a bajillion of them.

I discovered the park in February when Chris Young (of Modernist Cuisine fame) and I were doing a gig at the South Beach Food and Wine Festival in Miami.

You enter the park through the gift shop, where you’ll find a table filled with whatever fruit the park staff think you should sample. Definitely sample them. If you want to eat more while you’re wandering in the orchards, you need to arrange a special guided tour in advance. If you show up unannounced they discourage you from eating fruit off the trees – it is a public park, and if everyone ate fruit off the trees it would be stripped bare lickety-split. Fruit that has fallen on the ground is OK to pilfer, but is often full of bugs (don’t be a wimp – eat around them) or, worse, over-ripe. We had arranged for a special tour and our intrepid guide, Rose Kennedy, picked fruit for us to taste.

Tropical fruit is weird. Forget everything you know about how fruit works –tropicals are different. Some highlights:

Canistel:

The canistel (Pouteria campechiana), which hails from Central America and southern Mexico, is a fruit I can get behind. It is picked hard and then allowed to soften off the tree into a delicious sweet fruit confection. Canistels are very dry as fruits go – similar to an avocado but without oil. The texture is likened by some to cooked egg yolks – a custard whose taste the canistel also resembles; in fact, the canistel is sometimes called the eggfruit. It makes a fantastic ice cream or sorbet (and doesn’t require added cream). If I had easy access to canistel I would make it into ice cream all the time.

Jaboticaba, Tree Grape:

Jaboticabas (Myrciaria cauliflora, M. jaboticaba and other related species) are South American fruits that look like perfectly round grapes but grow on the bark of their trees in a most peculiar way (see the picture at the beginning of the post). There are many named cultivars, but I didn’t keep a record of the ones we sampled (idiot!). Most were delicious – sprightly with a muscadine grape/musky twang to them. One version had a bit of Concord grape flavor. A variety I didn’t favor was more yellow than purple and tasted like sucking on freshly soldered circuit boards.

Sapodilla:

Many tropical fruits don’t change their appearance as they mature. Even more problematic for the picker: fruits of widely differing maturities may be found on the same tree with roughly the same appearance. Half the trick of eating tropical fruits appears to be learning when and how to pick them. Take sapodilla (Manilkara zapota), a fruit of Mexican origin. Immature sapodillas look and feel like mature ones. How do you know when they are ready to pick? Simply scratch the skin with your fingernail. If you see green, leave it where it’s been. If you see yellow-brown, take it down. Many of the sapodillas on the trees we saw showed signs of finger-scratch testing. A mature sapodilla will soften up a couple of days after it is picked.

Sapodillas: scratch the skin with your fingernail. If it shows green it isn’t ready to pick.

All the sapodilla cultivars we tasted were extremely sugary – more like brown-sugary. On their own they are sickeningly sweet. The first bite is nice, but the second has to be forced down. Add acid to a sapodilla in the form of lime, however, and you’ll want to eat them by the bushel. Fully ripened sapodillas are soft. Some of the ones we tried were smooth textured and some were a bit grainy, like a pear.

The sapodilla tree and other related species produce chicle, the original chewing gum. People called chicleros make cuts in the tree bark, collect the latex that drips out, and then boil the latex till it reaches the proper consistency – at which point it is known as chicle. Although most modern gum is now made from cheaper synthetic bases, you can still easily get chicle (try www.terraspice.com) and make your own gum — it’s great fun.

Guavas can be Interesting:

It turns out that Guavas, a fruit I had thought was uninteresting and not so tasty, range radically in flavor, size, color, and texture. My previous guava judgment is akin to judging all of apple-dom based on a supermarket Red Delicious. The most interesting guava we tried is the cas (Psidium friedrichsthalianum), a tiny super-tart guava. Your first taste makes you pucker your lips like you were sucking on a lemon –but you are compelled to take a second taste. It would make a most refreshing drink.

The Much Maligned Starfruit:

Everybody seems to have a tropical fruit they don’t like, and starfruit, or carambola (Averrhoa carambola), is one that most of my chef friends deem useless. Even though carambolas are low on flavor, I find them quite pleasant: watery, mildly acidic, not hard – yet crunchy. I sampled around ten types of the park’s starfruit to see if there were some butt-kickers I could take home and rub in my buddies’ faces. Sadly, no. While I had some of the best starfruit I have ever had at the park, I tasted no game-changers that I could leverage into a starfruit proselytizing campaign.

Like Starfruits? They got plenty.

In case you were wondering, I save my tropical-fruit enmity for the dragon fruit (Hylocereus undatus) –one of the showiest fruits in the world. It tastes like crunchy, off-flavored water. Somebody please tell me I’m wrong and send me a good dragon fruit.

A Sapote by any Other Name:

Ask a tropical fruit expert about sapotes and they roll their eyes. Sapote, they will tell you, is a catch-all term for any sweet, roundish fruit from south of the border. You have to specify which sapote you are discussing. Who knew? Well, I want to discuss the black sapote (Diospyros ebenaster), or, as it was described to us, the chocolate pudding fruit. Pick them when they are green and when the sepals are starting to pull away from the fruit, and let ‘em soften up a week or so — you’ve got a fruit that tastes like carob pudding. Everyone says chocolate pudding, but to me it tasted more like carob (remember in the 80’s when people were all trying to convince themselves that carob tasted like chocolate?)

Carob or chocolate, this fruit is pretty damn good. As good as the fruit is, the ice cream is better. All you need to do is blend and freeze, nature does the rest. In the Miami area you can purchase a commercially made black sapote ice cream from Gaby’s Farm Tropical Fruits and Ice Creams. I recommend it.

Spices and Leaves and Such:

Fresh allspice leaves (Pimenta dioica). I want them, I need them. They smell like allspice, but fresh and green. Why can’t I buy them in New York? Even more important: Lemon Allspice leaves (aka Lemon Bay Rum, Pimenta racemosa). Why had I never heard of them, and why aren’t they in everybody’s kitchen? A leaf with the aromas of allspice and lemon!

Achiote (aka Annatto, Bixa orellana) – the fresh stuff. Beautiful. Don’t know if it tastes any different, but sure is purty.

The Guiana Chestnut (Pachira aquatica) is another cool tree from Central and South America. The nuts taste somewhat of chestnuts and can be boiled, roasted or fried (yes, I fried mine).

You Have To Keep Going Back

There is no best time to visit the park. There are always some fruits in season, and many trees bear fruit throughout the year in sporadic cycles that are difficult to predict. In the tropics, where there is no killing frost, trees aren’t necessarily tied to annual cycles the way our temperate plants are. The Monstera deliciosa takes over a year to go from flower to mature fruit. The odds that you’ll get to taste one on any particular visit are low.

On our February visit Rose often described the amazing fruit of some tree, how pretty it was, how delicious — and then told us it wasn’t available just then. Salivating, we would ask when it was available, and she would usually say, “oh, you just missed it,” or “oh, in a couple of weeks.” Rose described the product of one sporadic bearer, the ice-cream bean tree (Inga edulis), as a fruit that tasted like cotton-candy-flavored ice cream. From the description, it sounded like a life-changing fruit –a fruit that could launch a thousand ships –a fruit you’d cheat your mom to get. Chris in particular was disappointed he didn’t get to taste it (here is the good news Chris: Nastassia and I tasted it on a later trip: it was good, but I wouldn’t cut off my pinky-toe for regular supply).

While the availability of some fruits is just the luck of the draw, some have definite seasons. The most important of these fruits is the Mango. Mangos are in season only in the summertime. The Park has over 140 mango varieties. I also learned that a couple of miles away from the park lies the Fairchild Farm, a division of the Fairchild Botanical Gardens, with over 400 mango cultivars — the greatest collection of mangos in the country. I love mangos, and who doesn’t? Harold McGee and I had been trying for several years to organize a mango tasting trip to India, one of the centers of mango diversity, but Nastassia and I decided that our first mango-thon should be in Florida. Stay tuned for part 2 of this post: Mango Madness.

Hey Dave, I’m Gonna be near Homestead, but don’t have time for the Park. What Should I Do?

  • Rearrange your schedule to make time.
  • Visit Robert is Here, a nearby famous fruit stand featuring loads of locally grown tropical fruits.

Hey Dave, I wanna know more. What books should I read on tropical Fruit?

I’m glad you asked.

If you go to the Fruit and Spice Park, purchase their slim guidebook at the gift shop and peruse it before heading outside.

For free reading you can’t beat the online version of Julia F. Morton’s out-of-print classic, Fruits of Warm Climates.

Published in 1987, Fruits of Warm Climates is still considered a go-to book by tropical fruit enthusiasts. If you want a more comprehensive list of plants, try Margaret Barwick’s Tropical & Subtropical Trees: A World Encyclopedic Guide.

The book is fantastic, but it doesn’t deal exclusively with fruit trees, and focuses on the trees themselves rather than the taste and use of the fruits. Still worth a read.

For the “Completely Useless for a New Yorker but Still Extremely Coveted” award, I present my favorite of the lot: Brazilian Fruits & Cultivated Exotics (for consuming in natura) by Harri Lorenzi, et al. Holy crap. Makes me want to move to Brazil. It contains a brain-busting array of fruits along with taste descriptions, usage, and beautiful shots of the plants and the fruits. By the way, in Brazil, an apple counts as a cultivated exotic but the rare mendubi-guaçu (the red fruit that looks like a flower on the cover) does not.

My wife occasionally tries to stanch the steady stream of new books coming into my small apartment — it must be admitted that the ones I already own are fitted into every crevice like Tetris pieces. She agreed that room needed to be made for this book. All the fruits are shot on a crazy blue background with a 1×1 cm grid pattern for scale. As the parenthetical part of the title suggests, this book only deals with fruits that are consumed without preparation. From the preface to the book:

We will not address fruits or parts of fruits that need some type of preparation (cooking, roasting or seasoning), before they can be consumed, like the palm fruit known as pupunha, the pepper, the red sweetsop, the mirliton, pumpkin, Brazilian nightshade (gilo), the cucumber, the olive, the scarlet eggplant, the elephant apple, the Ceylon cinnamon, etc.

Badass. The authors have so many awesome fruits to choose from that they won’t even deign to eat a cucumber raw! The book is hard to find at a reasonable price. I got my copy from a tropical fruit website in Hawaii.

Up next: Mango Madness!

From: http://www.cookingissues.com/index.html%3Fp=5410.html

How to poop like an astronaut

theverge.comMonday 23 November 2015Jesse Emspak12 minute read
If humans are going to go to Mars, or mine asteroids, then recycling is going to matter. And that means recycling everything — including human waste.

To get to Mars, we need better space toilets

If humans are going to go to Mars, or mine asteroids, then recycling is going to matter. And that means recycling everything — including human waste.

NASA has put some effort into solving the problem, because recycling is such an essential part of building a spaceship that can get people to Mars or anywhere else. Interplanetary missions won’t be able to get supplies from Earth. Resources will be limited, and that means "closing the loop" — you can’t afford to throw away anything, not even human poop. Any spacecraft design has to take that into account.

"You have to start with a life support system and build a spacecraft around it," says Marc Cohen, president of Astrotecture, a consulting firm that specializes in space architecture.

The Story So Far

First a few facts about human poop. A healthy person produces about 128 grams of feces per day, or about 46.7 kilograms (102 pounds) in a year, according to the medical literature. For a mission to Mars that might last two to three years, a crew of six (as posited in The Martian) would generate 300 pounds of feces each.


In the Apollo era, the toilet was a plastic bag attached to the astronauts’ butts with an adhesive. Urine was collected with a condom-like device and vented to space. Famously — or infamously — the last Mercury flight in 1963 actually suffered system failures because the urine collection bag leaked. Clearly, the bags didn’t work. Floating human waste is also a health hazard, since one can inhale tiny bits of urine or feces as they float around.

Enter Don "Doctor Flush" Rethke, a retired engineer from Hamilton Standard, now UTC Aerospace Systems. Rethke goes way back with NASA; he worked on life support for the Apollo 13 mission. He designed a commode that takes in urine and feces separately. It used suction — essential because in zero-g, liquids turn to spheres and float around, and solid waste won’t just fall into the bowl. Urine was collected in a cup-like contraption, while the solid stuff was sucked into a container and exposed to the vacuum — effectively freeze-dried and compressed. "We called them fecal patties," Rethke says.

A variation of his design is on the International Space Station, with two big differences: one is that the urine is now treated so that the water can be removed and reused, and the other is that the new system doesn’t freeze-dry the feces. (The ISS recycling system also takes in moisture from the air, which is largely astronauts’ sweat and exhalations.) As for the solid waste, during the shuttle era it was just brought back. On the ISS, it’s stored in plastic or metal containers. When those fill up, astronauts load them onto a used Russian Progress vehicle, undock it from the ISS, and let it fall to Earth to burn in the atmosphere, along with the rest of the ISS’s garbage. (Think of that the next time you see a meteor shower.)

Throwing feces out an airlock is not an option, for a couple of reasons. One is that anything jettisoned from the spacecraft won’t go very far away without a substantial push. So if you throw something outside, it will simply follow your trajectory — any waste thrown "away" would follow you all the way to Mars. Pushing it away would mean something like opening an air lock with some air still in it, to provide a kind of explosive decompression. That would waste air.

Then there’s that trajectory problem — even if the waste moves some distance away, blocks of it might drift to various points around the ship, entering unpredictable orbits. (During the shuttle and Apollo eras, it wasn’t unusual for the spacecraft to meet clouds of urine-ice crystals that had been vented previously.) Dumping out a container behind the spacecraft is, as a result, quite dangerous. "When you near your objective you’re going to make a sudden stop," says John W. Fisher, of NASA’s Ames Research Center, who has written several papers on recycling waste in space. "If you slam on the brakes, it’s going to hit you in the rear end." A pound bag of anything hitting a decelerating spacecraft can pack a lot of force.

The second problem is that some human feces — now freeze-dried in space — would probably settle back on the ship; absent a substantial push, the turds will just hang around. The poop, now in a powdery, crystalline form, would get on the windows, says Fisher. It would foul optical sensors as well. Unlike bird droppings on a windshield, there’s no way to squeegee it off.

So you have to store it, Rethke says. In the early days of the shuttle commode, they thought of refrigeration to keep the bacteria from growing. "That takes energy, and you have to back it up with a redundant system," he says.

Besides, throwing feces away is actually the last thing space crews want to do — there’s too much useful stuff in it. About 75 percent of it is water, along with bacteria from our guts and human cells. Some 80 percent of the solid mass is organic molecules, which means compounds containing carbon. About a quarter of that is bacterial biomass, another quarter is protein, another is undigested plant matter (mostly the fiber), and a smaller percentage is fat. Organic chemicals and water are like gold in space.

On Mars, human poop, at the very least, would make a good fertilizer to grow food, Rethke says. "I would put it into a mushroom patch — let Mars take care of it."

Reuse, Recycle

Human feces aren’t the only thing you need to recycle. People produce a lot of garbage. All this adds complexity to the problem of recycling and reuse. Any machines for doing that have to be light, because launching anything into orbit is pricey, thousands of dollars per pound. Those machines also have to be small, because there’s only so much room in a space module. And they have to work reliably and be easy to fix, because there’s no calling for help between Earth and Mars.

Jay Perry, lead aerospace engineer for environmental control and life support systems at NASA’s Marshall Space Flight Center, says designing such systems is complicated. Take urine, for example: separating water from urine is relatively straightforward on Earth, but in a zero-gravity environment, the situation changes.

For example, weightless astronauts’ bones lose mass and density, since there’s no loading on them. This is why current astronauts on the ISS have a strict exercise regimen. The bone mass gets excreted as calcium, which then ends up in the urine. That places a limit on how much water can be pulled out, because eventually the remaining stuff is a concentrated brine, "unpleasant stuff to deal with." A 2013 study by United Technologies Aerospace Systems noted that the calcium forms small kidney stones, which can clog up the valves on toilets.

Human feces pose similar challenges, both because of zero gravity and figuring out which chemicals you want to save. In addition there’s the question of the necessary energy and the complexity of the system you want to build. The United Technologies study, for example, noted that current space toilets use machines to compress the poop. That adds complexity — instead, the study proposes a manual lever, which requires no power (except that provided by the crew member’s arm).

While there are a lot of useful chemicals in poop, separating every one of them isn’t easy. Chemical toilets and septic tanks would be useless. Chemical toilets don’t really work because the very compounds used to break down waste would still need to be sent up with the astronauts. You’d also need hundreds to thousands of gallons of that blue-dyed stuff for a years-long journey, and most of it is water — effectively you’d be adding tons of water that would only be used in toilets, which isn’t very efficient. Septic tanks depend on gravity to work — and you still have to store the feces somewhere.

Rethke says he favored using natural biodegradation; simply allowing the fecal material (and whatever else — "menstrual waste, vomitus, it’s all in there") from the commode to ferment in a metal container with some activated charcoal to stop the odors. The container could release gas — almost all would be carbon dioxide — which the spacecraft’s scrubbers could handle well enough. He even built such a device. "I put it on my desk for several months," he says. "Nobody noticed." Once astronauts get to Mars, the stuff in the containers could be fertilizer. The down side is the storage — the volumes would start to add up.


Weird as it may sound, poop may provide good radiation shielding. In space, there are two sources of ionizing radiation that could harm astronauts. One is the background of galactic cosmic rays (or GCR). The other is a solar storm, known as a "solar particle event" or SPE. Both consist of charged particles, mostly protons.

These sources of radiation are less of a problem for ISS astronauts because they are still inside the Earth’s protective magnetic field. But once astronauts leave that field, the SPE could cause acute radiation sickness, while cosmic rays increase the risk of cancer.

The most efficient shielding is solid hydrogen because the element more easily deflects flying particles. But solid hydrogen isn’t available outside of a gas giant, and liquid hydrogen is difficult to handle, needing high pressures, cryogenic temperatures, or both. The next best thing is water, which has lots of hydrogen in it, or polyethylene. Metal shielding like lead, which provides good protection against gamma and X-rays, is actually worse than no shielding at all, because the protons hit the atoms in the metal and create cascades of other particles, creating even more harmful radiation.

Jack Miller, a nuclear physicist at Lawrence Berkeley National Laboratory, along with Michael Flynn and Marc Cohen of NASA’s Ames Research Center, conducted an experiment funded by a grant from NASA to see how well human waste would work as radiation shielding. He and his colleagues couldn’t use real feces; instead they used a simulated poo made out of miso, peanut oil, propylene glycol, psyllium husks, salt, urea, and yeast. The goal was not to exactly duplicate the actual chemicals in feces; they wanted something roughly like it that held water and absorbed radiation and particles similarly.

They put it in a particle beam to see how well it absorbed the energy of flying protons. The beam was about as energetic as particles typically found in space. The fecal simulator absorbed a measurable amount of the energy, and the team found that the thickness matters. Too thin and the problem gets worse for the same reason that metals are bad shielding — the spaceborne particles make cascades. However, they were able to calculate that a fecal shield about 8 to 11 inches thick would cut down the radiation dose a lot. That was a good result, though Miller noted that the situation is more complex.

Remember, there are two kinds of radiation in outer space: the SPEs and the background radiation from cosmic rays. Cosmic rays carry five times as much energy as SPE particles do, and they’re the ones that can increase the risk of cancer. (NASA rules say the increased risk to astronauts shouldn’t be more than 3 percent above the general population.) The fecal simulator wasn’t as good at stopping those, but that was expected. "The energy of GCR is so high it will punch through just about anything," Miller says. "So you try to balance getting the risk as low as reasonably achievable."


Another issue is that you can’t simply put the feces in sealed bags or metal containers because the CO2 and other gases they generate could make them explode, absent some "breathing" mechanism as in Rethke’s vision of making fertilizer. So sterilizing the waste might be a good idea.

To do that, some proposed systems effectively burn the waste, without oxygen present, a process called pyrolysis. This also allows for more immediate use of the water. Advanced Fuel Research, a company in East Hartford, Connecticut, is exploring a variation called torrefaction (which takes less energy to do than straight-up pyrolysis). The waste gets heated to around 550 degrees Fahrenheit (300 degrees Celsius). What’s left is something compact and dry, mostly carbon. At the same time it retains a lot of hydrogen.

Rethke notes one trade-off with pyrolysis or torrefaction is what to do with the leftover carbon. "If it’s a brick that’s one thing," he says. "But powder is harder." Remember there’s no gravity, so any particles are going to float around and could foul air intakes. So you’d need some way of compacting the carbon to store it.


Torrefaction has other challenges too, says Michael Serio, the president of Advanced Fuel Research. (He’s authored two papers on the subject, and has more work — involving bird and dog manure — forthcoming.) While some materials will reduce to ash, others won’t. Cotton, for example, contains hemicellulose, which doesn’t break down as well. "A cotton T-shirt would just look like a burned T-shirt," he says.

One could just make all the waste into bricks, Serio says. You take all the garbage — food wrappers, human waste, everything — and heat it up enough to melt it into a brick. This reduces volume and detoxifies the waste. That’s good for making partial radiation shields or even, Serio says, bricks for a Martian (or Lunar) habitat. Serio is working with other companies to see if there’s a way to build some kind of heated recycling into a commode itself. The big challenge would be making it compact and fast enough so that it doesn’t put the toilet out of commission for extended periods.

These recycling technologies are all promising enough. Cohen, though, expressed some frustration at the way NASA has approached funding. Cohen, a co-investigator with Miller and Michael Flynn of Ames on the radiation shielding experiments, says there has been little development beyond simple demonstrators. NASA isn’t planning a Mars mission explicitly — the closest they’ve come is a road map. "There’s been such deep cutbacks it’s difficult to get anything funded," he says.

Even so, NASA will have to come up with something if the agency is serious about going out of Earth orbit — even if only to return to the Moon. "What NASA would like is you drop a bag of poop into a canister — maybe process it right below the commode," says Serio.

Rethke added that whatever system is in place also has to have built-in redundancy and some way to fix it. Natural bacteria, he notes, do a fine job of breaking stuff down, don’t need complex machinery to operate, use no electricity, and produce some very useful chemicals in the process. (Carbon dioxide, for instance, can be "burned" with hydrogen to make methane and water.) That’s one reason he likes natural biodegradation. "It’s all about how much power to use for reclamation, versus storage, versus the weight," Rethke says. "I like to keep things simple."

Correction: Due to an editing error, the word "each" was dropped from a sentence about how much poop a crew of six en route to Mars would produce; each astronaut would produce 300 pounds of feces — not 300 pounds total. We regret the error.

From: https://www.theverge.com/2015/11/23/9775586/how-astronauts-poop-space-toilet-design-mars-iss

On Washington’s McNeil Island, the only residents are 214 dangerous sex offenders

theguardian.comWednesday 03 October 2018Emily Gillespie on McNeil Island, WA11 minute read
McNeil Island, nestled in Puget Sound, is unpopulated except for the 214 people who live at the special commitment center, a facility for former prison inmates. All men have served their sentence and yet, due to a controversial legal mandate, they remain confined indefinitely.
Calvin Malone, 67, a resident of the McNeil Island special commitment center, stands in a Buddhist meditation area he helped create at the center. Photograph: Terray Sylvester for the Guardian

A small island in the state of Washington houses a group of unlikely residents: they are all men the state considers its most dangerous sex offenders.

McNeil Island, nestled in Puget Sound, is unpopulated except for the 214 people who live at the special commitment center, a facility for former prison inmates. All men have served their sentence and yet, due to a controversial legal mandate, they remain confined indefinitely.

The only way on and off the small island is a passenger-only ferry, which makes the 15-minute trip every two hours. The ferry docks at a defunct prison on the island and a bus takes employees and visitors to the facility a few miles inland. Along the way, the bus passes an overgrown baseball field and boarded-up houses, remnants of the prison employees and their families who called the island home until the prison closed in 2011.

Few people who live in the region know about the island and its unusual residents, and even fewer know about the equally unusual law that put them there.

McNeil Island, owned by Washington state, is inhabited solely by residents of the state-run McNeil Island special commitment center. Photograph: Terray Sylvester for the Guardian

Kelly Canary, an attorney who represents some of the men confined to the commitment center, said people are often shocked when they discover that “even after [offenders have] served their time and get out of prison, they can be civilly committed and detained for the rest of their life.”

Each of the residents has previously been convicted of at least one sex crime – including sexual assault, rape and child molestation. A court has then found them to meet the legal definition of a “sexually violent predator”, meaning they have a mental abnormality or personality disorder that makes them likely to engage in repeat sexual violence.

Civil commitment centers, which exist in fewer than half of US states, are meant as a community safeguard and a means of providing treatment for the offenders. But they’re riddled with controversies. Criminal justice reform advocates fear the implications of predicting future risk and basing confinement on what someone might do.

On top of that, these costly facilities also have low release numbers, so little is known about whether they are doing anything to keep communities safer.


The people sent to the special commitment center on McNeil Island are called “residents”, not inmates, though it is difficult to distinguish the facility from a prison. Rows of barbed-wire fences pen the grounds and counselors check residents every hour to make sure they are adhering to the facility’s rules.

“In most ways, it’s worse because the illusion is it’s not prison,” Calvin Malone, one of the residents, told me.

During the 1970s and 1980s, Malone worked as a Boy Scout troop leader in various states across the country, as well as with an organization that works with at-risk youth. In these roles, he molested numerous boys and was convicted of sex crimes in California, Oregon and Washington.

When he entered prison, where he spent 20-plus years, he was addicted to heroin.

“I didn’t care about anything,” he said. “I put on a facade that I did, that’s how I had to navigate.”

A resident talks on the phone at the McNeil Island special commitment center. The center is home to 214 sex offenders. Photograph: Terray Sylvester for the Guardian

About a year in, he learned about Buddhism from a magazine. He started meditating and corresponding with Buddhists outside of prison. During his sentence he also underwent sex offender therapy and he said that the combination of Buddhist teachings and treatment helped him gain perspective on who he was and what he’d done.

“I convinced myself all these years that I was a great guy … I justified, I manipulated, I minimized. I did all the things an offender will do to justify behavior to myself,” he said. “[Treatment and meditation] raised my level of empathy to a point where I understood the impact of my offending behavior and the ultimate damage that was done.”

Malone said he doesn’t like talking about how he feels in terms of shame or guilt. Those emotions, he said, have more to do with how he feels about himself. Instead, he said he feels regret.

“To regret is to understand what you’ve done and the losses that have occurred because of your actions and it allows you the space to move forward so that you’re not wallowing,” he said. “I have a tremendous amount of regret.”


Washington’s civil commitment center is unique, not only for its banished-to-an-island quality, but because it was the first of its kind.

On 26 September 1988, convicted sex offender Gene Raymond Kane abducted, raped and murdered 29-year-old Diane Ballasiotes. At the time of the incident, Kane had been released from prison to a work release center.

Ballasiotes’s death, followed by two other disturbing sexual assaults by different assailants, fueled a public outcry that eventually led the governor to sign the Community Protection Act of 1990. The act was a package of laws aimed at sex offenders, including tougher sentences, a sex offender registration and the creation of a procedure that allowed authorities to indefinitely lock up sex offenders when a court believes them a continued threat to the community.

Since then, 19 other states have enacted similar civil commitment laws. There are more than 5,200 people civilly committed in the US, according to a 2017 survey of 20 civil commitment centers.

Residents walk the SCC grounds on McNeil Island. Photograph: Terray Sylvester for the Guardian

About half of the states with such laws allow the commitment of individuals who offended as juveniles. Many of those committed are diagnosed with a general paraphilia, a condition in which a person’s sexual arousal and gratification depends on behavior considered atypical or extreme.

Mental health professionals are split as to whether this diagnosis as a commitment standard is appropriate, Dr Shan Jumper, president of the Sex Offender Civil Commitment Programs Network (SOCCPN), told me. In many sexually violent predator evaluations, men convicted of rape are diagnosed with “paraphilia – not otherwise specified”, Jumper said. The controversy lies, he said, in the fact that the Diagnostic and Statistical Manual of Mental Disorders does not have a specific classification for adults who are sexually aroused by those who don’t consent.

Fundamentally, these laws are about predicting a person’s future risk, which comes with its own moral and philosophical dilemmas.

To do this, states use actuarial scales, which predict an offender’s risk in the same way that car insurance companies determine rates. A widely used tool, the Static-99R, produces a score based on a number of mostly unchangeable things including criminal and relationship history.

The results, along with other evidence such as expert testimony from psychologists, are presented to a judge or jury, who determine whether the offender meets the criteria.

But SOCCPN concedes that current research and actuarial tools are not designed to predict individual risk. “To some extent the criminal justice system is requiring opinions to be made, decisions to be made, that go somewhat beyond the knowledge base that we have,” Dr Michael Miner, a professor of human sexuality at the University of Minnesota and past president of the Association for the Treatment of Sexual Abusers (ATSA), told me.

Miner said that aside from the problems with risk assessment, he questions the entire civil commitment process.

“You either have a mental defect that makes it unlikely that you can control your behavior and therefore you’re not guilty by reason of insanity, or you’re responsible for your behavior,” he said.

“It seems to me that a more honest system would, at the front end, say: ‘we just think you’re a bad guy and we’re not going to let you out, we’re going to give you a life sentence.’ I’m not advocating for life sentences for sex offenders, but that seems like a more honest route.”

A resident sits on a bench. Civil confinement in Washington cost $185,136 per resident in 2018. Photograph: Terray Sylvester for the Guardian

The US supreme court has upheld the constitutionality of civil commitment statutes three times. ATSA doesn’t take an official position for or against civil commitment centers.

Civil confinement in Washington cost $185,136 per resident in 2018. That is about five times more per person than the average cost of confining one Washington prisoner in 2017, the most recent year for which data is available.

Miner points out that sex offenders have a relatively low reoffense rate. Of the offenders convicted of rape and sexual assault who were released from prison in 30 states in 2005, an estimated 5.6% were rearrested for rape or sexual assault within five years, according to a 2016 study by the US Department of Justice. The same statistics for other types of crimes were much higher: 54% of property offenders were rearrested for a property crime and 33% of drug offenders were rearrested for a drug crime.

“There is a moral panic around sexual crimes and [the public believes] that these people pose an extraordinarily high level of danger,” Miner said. “To the frustration of me and a lot of other people who are trying to come up with commonsense ways of preventing sexual violence, the message that most of these people are not really all that risky isn’t something that people seem to listen to.”

* * *

In addition to safety for the larger community, civil commitment centers are designed to provide sex offenders with treatment. This is usually based in cognitive behavioral therapy, which aims at challenging distorted thoughts and regulating emotions to change behavior.

In therapy at the SCC, offenders are encouraged to disclose all of their sexual deviance to help understand the scope of their problem. Clinicians then target the factors that make them vulnerable to reoffend. The ultimate goal isn’t to eliminate urges but to mitigate risk by modifying thoughts and emotions to change destructive behavior.

“We focus on what we can change,” said Dr Elena Lopez, chief of resident treatment at the SCC. “It’s different for every person. They might have their own internal personal hurdles that keep them from progressing, personality traits, motivation, acute medical conditions, stressors.”

‘I’m helping people shift and change their lives to be meaningful and safe,’ said Dr Elena Lopez, SCC’s chief of residential treatment. Photograph: Terray Sylvester for the Guardian

Though there is limited data on sex offender treatment, research shows that it is promising in terms of reducing recidivism.

All civil commitment centers offer treatment, but participation isn’t mandatory. On McNeil Island, about 62% of the residents participate in treatment.

Working with this population can be challenging but also rewarding, Lopez said, especially when taking into account the small incremental changes that happen over time.

“We as clinicians can’t expect quick change because they didn’t get here overnight. We’re talking about long histories of engaging in this type of behavior, this type of interaction, maybe even this style of seeing the world that makes it hard to keep others and themselves safe,” she said. “I take great pride in knowing that I’m keeping the community safe, but I’m also helping people shift and change their lives to be meaningful and safe.”

* * *

Once someone is labeled a sexually violent predator and committed to a civil commitment center, it can be difficult for them to get released.

In most states, a person who is civilly committed has the right to an annual review, in which a court goes over each offender’s history and treatment progress to consider release.

At the SCC, offenders have a yearly evaluation by a forensic team, which reviews documents, interviews offenders and clinicians and collects results from a polygraph and penile plethysmography, a tool designed to measure sexual arousal.

Dr Holly Coryell, chief of forensic services at the SCC, said forensic evaluators are ultimately looking to answer three psycho-legal questions: does the person continue to meet the legal criteria for a sexually violent predator? Are less restrictive alternatives in the person’s best interest? Can conditions be imposed that would adequately protect the community?

Forensic evaluators answer these questions in a recommendation that is then forwarded to the court, creating the opportunity for release hearings.

But arguing that a sex offender should be released to the community can be an uphill battle.

“The state gets to say, at the beginning of trial, he is a sexually violent predator,” Canary said. “Getting the jury to be on board with [the idea that he might not be any more] is pretty difficult.”

Though offenders are encouraged to disclose everything in treatment, they sign away their confidentiality and everything revealed to a clinician can be used as evidence. Much like an alcoholic is encouraged to admit they’re always in recovery, offenders are taught that treatment is ongoing and that consistent self-monitoring is key. Canary said that many of her clients readily admit the fact that they’re always a risk to the community, a line of thinking that helps them stay self-aware.

“But then the jury hears that,” Canary said. “You have your client saying, ‘Well, sure, I’m a risk to reoffend.’ It’s something that jurors just don’t like to hear … once they hear that from your client, it’s kind of hard to put that into perspective.”

Through the Washington court process, a civilly committed person can be released to less restrictive alternatives, which typically include outpatient treatment and tight restrictions, or they can be released without conditions.

The number of people released nationally from these facilities through either avenue is historically low. On average, these facilities house about 260 people. Of the 16 states that provided release numbers to a 2017 survey of civil commitment centers, the average number of people released from a facility per year was seven. Five states released an average of less than one person per year.

The low number of people released from these facilities makes it hard to research the effectiveness of these laws and the facilities themselves.

As for Malone, he doesn’t participate in treatment. Now in his 60s, he said he has benefited from treatment in prison and prefers to focus on other things to make life on McNeil Island better. He has led a few lawsuits aimed at improvements, one dealing with tobacco use and another calling into question the facility’s water quality. He is heavily involved in the Buddhist community and spent years petitioning to have a pagoda built. He said he enjoys meditating among the gardens that surround the ornate structure.

“I’ve accepted the fact that this could be my last stop. I could die here,” he said. “The only thing I would love to do is have the opportunity to talk more about what my future would be rather than to constantly revisit something that occurred decades ago … It makes it difficult to do rehabilitation work on yourself when you’re still stuck in that experience.”

From: https://www.theguardian.com/us-news/2018/oct/03/dangerous-sex-offenders-mcneil-island-commitment-center

Crafting link underlines on Medium

medium.designTuesday 18 March 2014Marcin Wichary10 minute read
How hard could it be to draw a horizontal line on the screen? It seems wrangling a few pixels together to stand in a file would be something computers should be pretty good at anno domini twenty-fourteen.

How hard could it be to draw a horizontal line on the screen? It seems wrangling a few pixels together to stand in a file would be something computers should be pretty good at anno domini twenty-fourteen.

One would think so, but simple things are rarely simple under the surface… at least if they are worth anything. Typography, likewise, is a game of nuance. This is a story on how a quick evening project to fix the appearance of underlined Medium links turned into a month-long endeavour.

The history

Typography was never particularly fond of underlining. Do you want to emphasize your words? Grab some italics or use bold — hell, add some more tracking in between the letters if you have none of the above. Don’t just draw a line underneath like a cavedouchebag.

Underlines and annotations in a book from 1493. Photo by Penn Provenance Project. http://www.flickr.com/photos/58558794@N07/9710243736

However, we also just described underlines’ most attractive property: it’s very easy to add them to already existing text. That makes it easy to understand why underlines were frequently used as annotations:

  • Drawing underlines, by hand, on top of already printed text — your first experience of the underline might very well have been your teacher scoffing at your bad writing in primary school.
  • In the typewriter universe of the 20th century — with their inflexible, monospaced fonts, and typically one-coloured ribbons — underline was the only realistic method of highlighting: just backspace through what you already typed, and add a batch of underscores in the same place. (The underscores eventually migrated to the programming world to stand in for spaces in filenames or variable names… with their low placement still betraying the original typewriter roots.)

Typewriter punctuation; see underscore above the number 6

But then came the web, and underlines teamed up with Colour Blue to find a more worthwhile destiny: signifying clickable links. They might disappear one day — even Google abandoned them recently on their search pages — but I don’t believe that’ll happen any time soon.

Unfortunately, for all the advances in web typography we’ve seen during the years — better CSS properties, more support for internationalization, custom web fonts — underlines remained mostly as they were, with very little customization available to web designers.

The project starts tonight

And then things got even worse. I woke up one February morning, and saw Chrome doing this to Medium links:

Talk about the obesity epidemic! Ugly. Distracting. Unacceptable. I looked at the other browsers and while the underlines looked better there, oftentimes the improvements went only halfway. Worse, each browser rendered the underlines as they saw fit (with Firefox faring best of all):

Soon after joining Medium, I started a running document listing all the improvements to typography I wanted us to tackle in the coming months. On the day that Chrome bug manifested itself, I added the underlines… and put them at the very top as the next thing to work on.

The non-branded hammer

I find a lot of inspiration from computer history, and the creative solutions people before me came up with when dealing with the early, inflexible machines.

I took this photo a few years ago in the IBM 1401 restoration lab at the Computer History Museum in California. It’s a suitcase with all the tools necessary to operate your computer… in the 1950s. Most of the items are official and branded, even. (With one exception: the hammer. IBM claimed it shouldn’t be necessary; the repairmen knew better.)

This was the age where computers were still electromechanical, with card punchers, chain printers, tape readers, and all sorts of peripherals needing your brawn as much as they required your brains — even if your eventual goal was very cerebral: to juggle the invisible ones and zeroes.

Web design always seemed like this, too: finding convoluted, “dirty” solutions to often simple problems, using a very limited set of tools. Because no, there isn’t a simple way to tell your browser to just give you the link underlines you want. But yes, perhaps we can figure out some roundabout way.

To find a solution, however, we need to start by defining the problem.

Finding the perfect underline

The perfect underline should be visible, but unobtrusive — allowing people to realize what’s clickable, but without drawing too much attention to itself. It should be positioned at just the right distance from the text, sitting comfortably behind it for when descenders want to occupy the same space:

So, the ideal technological solution would allow us to:

  • change the width of the line (with additional half-pixel/retina support),
  • change the distance from the text,
  • change the color (even if just to simulate thinner width by using lighter grays instead of black),
  • clear the descenders,
  • (perhaps) have a separate style for visited links.

Storming our collective brains

I thought about it a bit myself, and asked some of the very smart front-end engineers around me. Collectively, this was the list of ideas we came up with:

1. As-is

Option one is always sticking with the default. But default is rarely good enough — and just like the rest of our product, we felt our readers deserved better.

2. Advanced underline CSS properties

CSS standards promise a few interesting properties, among them text-decoration-skip and text-underline-position. However, they’re not supported by most browsers and might not be any time soon.
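To make the wish list concrete, here is a rough sketch of the CSS we would have liked to simply write, shown here injected from TypeScript; the selector is generic, and at the time most browsers ignored these two properties entirely.

```ts
// A sketch of the underline CSS we would have liked to use (illustrative only;
// in 2014 most browsers ignored these two properties).
const wishfulCss = `
  a {
    text-decoration: underline;
    text-decoration-skip: ink;       /* let descenders interrupt the line */
    text-underline-position: under;  /* control where the line sits */
  }
`;

const styleEl = document.createElement('style');
styleEl.textContent = wishfulCss;
document.head.appendChild(styleEl);
```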

3. Border or box shadow at the bottom

Using border-bottom to modify underlines counts among the prototypical CSS tricks from the late 1990s. It allows us to customize the colour, but the line usually sits too far below the letters. A quick prototype confirmed that — the border-bottom underlines felt as if they almost fell in between the lines of text:

It’s possible to hack our way through this limitation and raise them up by applying display: inline-block and reducing height or line-height, but this has one deadly constraint — the link is no longer allowed to break to a new line. That won’t work for regular body text. (Additionally, border-bottom won’t allow us to use anything smaller than two retina pixels.)
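For reference, a minimal sketch of that border-bottom approach and the inline-block workaround, injected the same way as the sketch above (the class names and values are illustrative, not Medium’s production CSS):

```ts
// The classic border-bottom underline, plus the inline-block hack that raises it.
// Illustrative values only; note that the hack prevents links from wrapping.
const borderBottomCss = `
  a.underline-border {
    text-decoration: none;
    border-bottom: 1px solid rgba(0, 0, 0, 0.6); /* custom colour, but sits low */
  }
  a.underline-border--raised {
    display: inline-block; /* needed for line-height below to pull the border up… */
    line-height: 1.05;     /* …but now the link can no longer break to a new line */
  }
`;

document.head.appendChild(
  Object.assign(document.createElement('style'), { textContent: borderBottomCss })
);
```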

Another idea — applying a carefully crafted inset box-shadow on the text — has somewhat similar limitations.

4. Bespoke underline font

Italic fonts are just slanted, and bold fonts are just thicker, right? No, of course not. Italicized and bolded type comes with its own separate letter shapes, customized and meticulously tweaked to preserve the visual integrity of the font:

If we scoff at faux bold and italic, why can’t we consider a proper separate underline font, with the underline being part of each glyph?

One can dream?

This is a potentially promising, but in our case flawed approach:

  • Serving a separate font would incur a big latency penalty (and font files are already the heaviest part of an initial Medium load).
  • Since the line is “baked into” the font, we cannot really move it or change its width depending on the font size… and changing its colour is ruled out as well.
  • Font licensing and serving issues can make it too complicated.
  • Kerning this might be tricky.

5. Drawing with <canvas>

Here at Medium, we’re using <canvas> in some unexpected — to me — places, for example our signature full-bleed, blurred images.

If we want to draw a line, why don’t we just draw a line? HTML <canvas> is, after all, designed for controlling individual pixels. This would allow us to have custom width, colour, even draw around descenders. However, the tricky part is lack of support for knowing when the links break — the tools to measure wrapped text precisely don’t exist or are very costly in JavaScript.

(A version of this idea is also a variant of the above, having a bespoke font with just the underline, and drawing it on top of the actual text… which we have to reject for the same reason.)
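Drawing the line itself really is the easy part; knowing where each wrapped link fragment begins and ends is what makes this approach impractical. A toy sketch of the easy half:

```ts
// Drawing a crisp 1px line with <canvas> is trivial…
const canvas = document.createElement('canvas');
canvas.width = 300;
canvas.height = 40;
const ctx = canvas.getContext('2d');
if (ctx) {
  ctx.strokeStyle = 'rgba(0, 0, 0, 0.6)';
  ctx.lineWidth = 1;
  ctx.beginPath();
  ctx.moveTo(0, 30.5);   // the .5 keeps a 1px stroke aligned to the pixel grid
  ctx.lineTo(300, 30.5);
  ctx.stroke();
}
document.body.appendChild(canvas);
// …the hard part is knowing where a wrapped link breaks, which is exactly
// the measurement the browser doesn't give us cheaply.
```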

6. Background images and gradients

And thus, we arrived at our unintuitive saviour: the background image. On the surface, background images don’t seem to have much to do with underlines. However, they have the interesting property of supporting wrapped text (we use backgrounds already for highlighting notes — click on the speech bubble on your right), and they have gained enough powers in recent years to be extremely flexible.

With modern CSS, background images can be positioned and scaled exactly as needed, including support for retina pixels. And they don’t even need to be separate images requiring additional web fetches: we can provide them inline using data protocol, or — even better — synthesize via gradients (which themselves are incredibly powerful).

So that’s what our underline could conceivably be: a tiny gradient; 1 or 2 pixels tall; horizontally stretched as far as it can go; vertically carefully positioned.
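Put together, such a gradient underline might look roughly like the sketch below. The class name, colour, and offsets are illustrative assumptions rather than the exact values Medium shipped:

```ts
// A background-gradient underline: thin, colourable, and it survives line wrapping.
// All numbers are illustrative; the real offsets depend on the font and its size.
const gradientUnderlineCss = `
  a.underline-gradient {
    text-decoration: none;
    background-image: linear-gradient(
      to bottom,
      rgba(0, 0, 0, 0.6) 50%,
      rgba(0, 0, 0, 0) 50%
    );                             /* top half of each tile is ink, bottom half empty */
    background-repeat: repeat-x;   /* so it follows the text across wrapped lines */
    background-size: 2px 2px;      /* a 1px visible line; halve it for a retina hairline */
    background-position: 0 1.07em; /* hypothetical distance below the text's top edge */
  }
`;

document.head.appendChild(
  Object.assign(document.createElement('style'), { textContent: gradientUnderlineCss })
);
```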

7. Clearing the descenders

Unfortunately, the background image/gradient solution won’t allow us to clear the descenders. My colleague Dustin found a way, and it’s as ingenious as it is impractical — applying a white CSS text shadow or a text stroke to paint over the underline and simulate a gap between the underline and the text.

Unfortunately, in order to achieve the best effect, we’d need to layer many, many fuzzy text shadows on one another, and that proves to be very expensive. (CSS property text-stroke, which seems to be perfect for this, doesn’t just go outside the text, but also overlaps some of the inside, rendering it thinner.) We might come back to this one day. For now, we had to do what sometimes needs to be done to ship things: mentally relegate this idea from the crucial part of the cake to just its icing.

(Note that this solution would also not work on text that overlays images, since we wouldn’t know exactly what the background colour would be.)
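In sketch form, Dustin’s idea looks something like this. It assumes a known white background, which is exactly why it breaks down over images, and a real version would need far more shadow layers than shown here:

```ts
// "Clearing the descenders" by stacking soft white text-shadows over the underline.
// Assumes a white page background; a convincing gap needs many more layers,
// which is what makes this too expensive to ship.
const descenderClearingCss = `
  a.underline-gradient {
    text-shadow:
      0 0 1px #fff, 0 0 1px #fff, 0 0 1px #fff,
      0 0 1px #fff, 0 0 1px #fff, 0 0 1px #fff;
  }
`;

document.head.appendChild(
  Object.assign(document.createElement('style'), { textContent: descenderClearingCss })
);
```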

Where things get complicated

Figuring out how to position one underline perfectly and test it on major browsers seemed like a nice, fast evening project. But this was my first month at Medium. I was still learning how complex our product is — and how carefully we hide that complexity from visitors.

Soon, I found out (or was reminded) that:

  • There are five places where our writers can use links: body text, H1, H2, image captions, and pull quotes. (That’s not even counting links in the UI, which I decided were out of scope for this project.)
  • Most of the links will be black on white, but some of them can be drawn on top of images.
  • There are, of course, displays with retina and non-retina pixels.
  • We adjust the sizes of fonts for tablets and mobile phones.
  • We fall back to a default system font when we decide that our type doesn’t support all the characters in a given language.
  • Some people zoom in (or zoom out) their browsers.
  • Some people use uncommon browsers.

A screenshot from my test Medium story

I carried on, adding more and more complexity and magic numbers to my suggested change. And this is where the code reviewers started asking the truly hard and important questions: Is this too complicated? Will this make the CSS too heavy? How would we maintain it? What happens if this fails?

And so, during the following weeks, I worked on:

  • coming up with formulas rather than arbitrary values — in the event of changing the fonts or font sizes in the future, adjusting the underline positions would be much easier and faster (a rough sketch follows below),
  • limiting the browsers we target so that, in the case of failure, the reader could see the default browser underlines (rather than not seeing them at all!),
  • simplifying the code so that while we might not get the perfect underlines, we could save both the bytes and future maintenance costs.

Some of my underline calculations
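As a hypothetical illustration of the formulas-over-magic-numbers point (the threshold and ratio below are assumptions, not the constants Medium actually uses), deriving the gradient’s size and position from the font size might look like this:

```ts
// Hypothetical illustration: derive the underline's thickness and vertical offset
// from the font size, so a future change to the type scale stays cheap.
function underlineBackground(fontSizePx: number): { size: string; position: string } {
  const thickness = fontSizePx >= 30 ? 2 : 1;      // assumed threshold for headings
  const offset = Math.round(fontSizePx * 1.07);    // assumed ratio below the text's top
  return {
    size: `${thickness * 2}px ${thickness * 2}px`, // gradient tile is twice the visible line
    position: `0 ${offset}px`,
  };
}

// e.g. 21px body text -> { size: '2px 2px', position: '0 22px' }
```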

And then, it happens

There’s something anticlimactic about finishing a long code review. There are no fireworks, no champagne; GitHub doesn’t change the UI in any celebratory way to highlight your achievement. After 31 days of discussions, trials and errors, and trying many approaches, both of my great reviewers sent the magic four letters: LGTM.

I am writing this on the public version of Medium, which doesn’t yet have the change in it, and I cringe whenever I see the underlines — but I am also happy that whenever you read this, they will, for the first time, look really good.

Before and after Chrome

There’s another reason long code reviews are anticlimactic: in the meantime, one usually starts, and refocuses on, newer projects. So much more work still needs to be done, so much work has already begun, and I am already excited for when I can tell you more about the upcoming projects from the typography wishlist.

One of them is improving how… LGTM looks (alongside, of course, all the other initialisms/acronyms). Stay tuned — and link away.

In the Medium typography series, we later covered hanging quotes, Whitespace, and pilcrows. Any typographical comments and suggestions? Send us feedback to typography@medium.com.

Many thanks to Daryl Koopersmith for inspiration to tackle this project and his guidance along the way.


From: https://medium.design/crafting-link-underlines-on-medium-7c03a9274f9

After Temporality

ribbonfarm.comThursday 02 February 2017Sarah Perry10 minute read
Time is weird. The alleged dimension of time has been under investigation by the physics police on charges of relativity weirdness and quantum weirdness. The math is hard, but you can see it in the ominous glint in the eyes of physicists who have had a couple of drinks.

Time is weird. The alleged dimension of time has been under investigation by the physics police on charges of relativity weirdness and quantum weirdness. The math is hard, but you can see it in the ominous glint in the eyes of physicists who have had a couple of drinks.

But subjective time is even more suspicious. Each observer possesses detailed and privileged access to a single entity’s experience of time (his own); however, this does not guarantee the ability to perceive one’s perceptions of time accurately, so as to report about it to the self or others. Access to the time perception of others is mediated by language and clever experimental designs. Unfortunately, the language of time is a zone of overload and squirrelly equivocation. Vyvyan Evans (2004) counts eight distinct meanings of the English noun “time,” each with different grammatical properties. Time can be a countable noun (“it happened three times”) or a mass noun (“some time ago”); agentic time (“time heals all wounds”) behaves like a proper noun, refusing definite and indefinite articles.

Perhaps we will get some purchase with chronesthesia, since Greek classical compounds are well-known for injecting rigor into the wayward vernacular. Chronesthesia is the sense of time – specifically, the ability to mentally project oneself into the future and the past, as in memory, planning, and fantasy (Tulving, 2002). It is sometimes called mental time travel. But already there is weirdness: why should the “time sense” be concerned with the imaginary, rather than the perception of time as it is actually experienced (duration, sequentiality, causality)?

Linear temporality (time as a sequential series of experiences) and chronesthesia (time as many simulations of past and future) are not conflicting models. Rather, they are deeply interlocking models that constantly construct each other. They are both illusions, though the way in which they are illusions is different. However, they are both highly functional, and the ways in which they are functional are complementary.

The Fabula of Linear Temporality

In folklore, the fabula is a stripped-down version of the events of a story in chronological order – a sort of minimal timeline of just the facts. This is in contrast to the way that the story is told (syuzhet), which may be nonlinear and told from the perspective of many characters, including unreliable narrators. Fabula corresponds to linear, sequential time; syuzhet corresponds to the chronesthetic experience.

Consider the fabula of the grocery store. You walk into the store and take a basket. Then you pick up items around the store and put them into the basket. Then you walk to the cashier, wait in line, and transfer your items to the checkout counter. The items are bagged; you pay for them, and carry them away.

This is a perfectly useful conception of grocery shopping. It functions as a script to help us use the grocery store, and it is articulable to others, in case we have some kind of grocery-store-related problem that we need to seek help with (e.g., is haggling permitted?).

The hidden side of the grocery store is that it is a zone of private fantasy and mental time travel. Perhaps there is a particular dish that you want to make. You imagine making the dish and the ingredients that go into it, informed by memories of past cooking experiences and recipe texts. You try to match what is desired to what is available. Products themselves may trigger memories and desires. Cupcakes? Raw kale? You may reach for fresh Brussels sprouts motivated by a fantasy of your future self eating roasted Brussels sprouts; you may draw your hand back, remembering that you let the last batch go bad; you may buy them anyway, thinking, “this time.” If they go bad anyway, then in a sense, your purchase was not of Brussels sprouts as food, but of Brussels sprouts as a scaffolding for a particular self-fantasy. Weird time threatens the thingness of things.

Now. Here you are at the cashier. You may rehearse the interaction, wonder if you will have to bag your own groceries, remember the times when cashiers made the joke of pretending to charge you for the cold bags you carry with you. Should you prepare a polite laugh? And then it’s over, and in a month you might not remember it at all. The experience will be folded into the grocery store script in long-term memory, if any trace of it remains.

I think it’s interesting how much mental time travel is involved in crushingly mundane activities. As I became a better cook, I noticed that when I got a food idea (a new dish or way of cooking), I would spend a great deal of time mentally simulating the process of slicing, sautéing, whisking, sprinkling, baking. The future simulations “reach back” into memory, collating scraps of memories of ingredient, flavor, and technique into a new whole. Mental simulations are rarely smooth: they hit obstacles that must be worked around, and particular segments must be re-simulated repeatedly. The fabula of a “recipe” reflects only a small portion of the reality of cooking. But it is a very useful condensation, providing a scaffolding for chronesthetic experience. And it is very easy to communicate.

Chronesthetic time

Linear timelines or scripts, along with memories in a richer sense, provide the basis for mental time travel. But linear timelines must themselves be abstracted (or extracted) from actual chronesthetic experience. Linear timelines are not simply available to perception; they must be constructed, with effort, out of the raw chronesthetic experience. The consensus social experience of time and the private experience of time mutually build each other.

Deeply Interlocking Time

“Deep Interlock and Ambiguity” is one of Christopher Alexander’s (2002) fundamental properties. Multiple elements “hook into” or grip each other, meeting in a zone of ambiguity that doesn’t clearly belong to either element. For example, a building surrounded by an arcade or gallery (in the architectural sense) deeply interlocks interior and exterior, meeting in a zone of ambiguity that is neither outdoors nor indoors. The shapes created by the columns of the arcade and the shapes created out of the space enclosed by the arcade seem to grip each other. The building becomes less separate from its surroundings.

Deep interlock can occur in ornaments, as in this detail of tile-work and brick, from the Tabriz Mosque (Alexander, 2002, at p. 198). The apricot-colored brick boundary has hook-shaped extensions that interlock with the botanical designs within and without, so that the black interior is deeply gripped. The hooks form spade shapes in each corner, in addition to having their own strong shape. All elements support each other; there is no separation, despite the fact that there is a strong boundary.

Detail in the 16th-century Tabriz Mosque

Time is deeply interlocking in this way: fingers reach into the past and the future, uniting in the zone of ambiguity formed by the chronesthetic being. Present experience takes its shape from flights into simulated future and past. The future takes its shape in part from the contents of simulated futures.

Past and future meet in a zone of ambiguity

Interestingly, there is evidence that remembering the past and imagining the future are not opposites, but expressions of a unified underlying capacity. Imagining past and future events seem to light up the same brain areas, and people with deficits in imagining the past (amnesia) tend to also have deficits in imagining and planning for the future (Schacter et al., 2008). Thus we can talk about constructing the past and “remembering” the future.

Iteration

Mental time travel to the future, or simulation, can be modeled as iterations on game theory problems, as in the Keynesian beauty contest. In the “guess 2/3 of the average” game, participants each choose a number between 0 and 100, inclusive; the object is to choose a number that is 2/3 of the average of the guesses of all participants.

A naive player might choose at random. Or he might reason it through: since 2/3 of the average can never exceed 66 (even if everyone chose 100), no one should guess above 66. And if everyone else guesses at random, averaging around 50, the correct answer would be around 33. Iterating again, he thinks: everyone else knows this too, so everyone will guess 33, in which case the correct answer is about 22. The iteration continues down to the Nash equilibrium of 0. Extremely simplified simulations of the future, repeated – and, I should include, projecting those simulations onto the minds of other players – reveal a dominant strategy.

Simplified chronesthesia
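The iteration is easy to make explicit in a few lines of code (a toy illustration added here, not part of the original essay): each additional level of simulation multiplies the expected average by 2/3, and the sequence sinks toward zero.

```ts
// Iterated reasoning in the "guess 2/3 of the average" game: every extra level of
// simulation multiplies the expected average by 2/3, converging toward 0.
function iterateGuesses(start: number, levels: number): number[] {
  const guesses = [start];
  for (let i = 0; i < levels; i++) {
    guesses.push((2 / 3) * guesses[guesses.length - 1]);
  }
  return guesses;
}

console.log(iterateGuesses(50, 5).map(g => g.toFixed(1)));
// -> [ '50.0', '33.3', '22.2', '14.8', '9.9', '6.6' ]  ...and onward toward the Nash equilibrium of 0
```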

Unfortunately, when games like this are played in real life (including more complex forms, such as poker), it is not the case that everyone plays the dominant strategy. 0 is usually incorrect in groups of real humans; they are more likely to average closer to 21. This is because real humans don’t iterate perfectly – and because humans know that other humans don’t iterate perfectly. Only if the answer to the game were common knowledge among the group would choosing zero be the correct answer.

The process of life – even simple life – reproducing itself in the course of evolution is analogous to game theory iteration, with similar results. The times at which migratory birds lay their eggs are a function of the history of thousands of generations of successful clutches. Organisms are a “best guess” at what will survive, reproduce, and flow into the future. Chronesthetic beings are a best guess at how to make best guesses.

There is some debate as to whether humans are the only species that imagines itself backwards and forwards in time. Nonlinguistic animals cannot report their experiences; however, scientists working tirelessly to annoy corvids and rats (among others) have produced some evidence of mental time travel in animals (Schacter et al., 2008). Corvids, such as scrub jays, cache food of varying perishability for months-long storage. Their ability to cache and relocate food speaks to a time sense – and they are apparently savvy enough to re-cache food in secret if another jay catches them caching the first time. The rat method is more invasive. Rats run mazes with electrodes sticking out of their brains, connected to particular neurons associated with places in the maze. These neurons seem to fire in the correct order during rat dreams, as if the rats were rehearsing in a sort of dream training camp. When running a familiar maze while awake, the place neurons fire before the rats arrive at the associated place, as if the rat were imagining the future course of events.

We just don’t know. But it’s premature to say that time is only deeply interlocked in human minds. It may be that simulation is a very old tool.

Phylogenetically, we find ourselves after temporality in the sequential sense; we are past or beyond the experience of time as a sequential series of moments and sense impressions.

Simulations, however, seem to have a strong relationship to events that actually occur on the consensus timeline. Some simulations seem to be about planning (as in simulating the interaction with a cashier at a grocery store). Other simulations seem to be a form of pleasurable escape, as in sexual fantasy or self-aggrandizing imaginings. People experiencing severe mental pain (as in depression) seem to fall out of time or get stuck in time; they demonstrate a reduced capacity to vividly imagine future (or even past) scenarios (Schacter et al., 2008). Time itself becomes poisoned by affect; the pleasure of reaching out and deeply interlocking with future and past is lost.

In the case of planning-type simulations judged to be positive, we are after temporality in that we seek after making these simulations come true on the consensus timeline. Relaxing notions of agency, we might say that the fantasies themselves are after temporality, auditioning to become real. It is not clear how intentional mental time travel is; a good portion of it can be classified as mind-wandering (Stawarczyk et al., 2011). The future and past can spring up to us, seemingly unbidden. Of course, there is no guarantee that a pleasing mental simulation will translate into a pleasing timeline reality.

The signs and symbols of language form a scaffolding for collective mental time travel, as in political/religious narratives of transformation and salvation. Common knowledge is powerful, as we have seen. Signs and symbols especially seem to be after temporality, in the sense of seeking to become real in the consensus timeline. The tactic of semiocide – “a situation in which signs and stories that are significant for someone are destroyed because of someone else’s malevolence or carelessness, thereby stealing a part of the former’s identity (Puura 2013)” – can shape both simulated and temporal futures. Fantasy colonized reality long ago. The war for the future plays out in the realm of fantasy and sign, as well as brick and blood.


References

Alexander, C. 2002. The phenomenon of life: The nature of order, book 1. Berkeley: Center for Environmental Structure.

Evans, V. 2004. How we conceptualise time: Language, meaning and temporal cognition. Essays in Arts and Sciences 33:13-44.

Puura, I., 2013. Nature in our memory. Sign Systems Studies 41:1:150-153.

Schacter, D.L., Addis, D.R. and Buckner, R.L., 2008. Episodic simulation of future events. Annals of the New York Academy of Sciences, 1124:1:39-60.

Stawarczyk, D., Majerus, S., Maj, M., Van der Linden, M. and D’Argembeau, A., 2011. Mind-wandering: phenomenology and function as assessed with a novel experience sampling method. Acta Psychologica, 136:3:370-381.

Tulving, E. 2002. Chronesthesia: Conscious awareness of subjective time. In D.T. Stuss & R.C. Knight (Eds.), Principles of frontal lobe function (pp. 311–325). New York: Oxford University Press.

From: https://www.ribbonfarm.com/2017/02/02/after-temporality/

RECONSIDER

m.signalvnoise.comWednesday 04 November 2015DHH13 minute read
About 12 years ago, I co-founded a startup called Basecamp: A simple project collaboration tool that helps people make progress together, sold on a monthly subscription. It took a part of some people’s work life and made it a little better.
#WEBSUMMIT2015

About 12 years ago, I co-founded a startup called Basecamp: A simple project collaboration tool that helps people make progress together, sold on a monthly subscription.

It took a part of some people’s work life and made it a little better. A little nicer than trying to manage a project over email or by stringing together a bunch of separate chat, file sharing, and task systems. Along the way it made for a comfortable business to own for my partner and me, and a great place to work for our employees.

That’s it.

It didn’t disrupt anything. It didn’t add any new members to the three-comma club. It was never a unicorn. Even worse: There are still, after all these years, less than fifty people working at Basecamp. We don’t even have a San Francisco satellite office!

I know what you’re thinking, right? BOOOORING. Why am I even listening to this guy? Isn’t this supposed to be a conference for the winners of the startup game? Like people who’ve either already taken hundreds of millions in venture capital or at least are aspiring to? Who the hell in their right mind would waste more than a decade toiling away at a company that doesn’t even have a pretense of an ambition for Eating The World™?

Well, the reason I’m here is to remind you that maybe, just maybe, you too have a nagging, gagging sense that the current atmosphere of disrupt-o-mania isn’t the only air a startup can breathe. That perhaps this zeal for disruption is not only crowding out other motives for doing a startup, but also can be downright poisonous for everyone here and the rest of the world.

Part of the problem seems to be that nobody these days is content to merely put their dent in the universe. No, they have to fucking own the universe. It’s not enough to be in the market, they have to dominate it. It’s not enough to serve customers, they have to capture them.

In fact, it’s hard to carry on a conversation with most startup people these days without getting inundated with odes to network effects and the valiance of deferring “monetization” until you find something everyone in the whole damn world wants to fixate their eyeballs on.

In this atmosphere, the term startup has been narrowed to describe the pursuit of total business domination. It’s turned into an obsession with unicorns and the properties of their “success”. A whole generation of people working with and for the internet enthralled by the prospect of being transformed into a mythical creature.

But who can blame them? This set of fairytale ideals is being reinforced at every turn.

Let’s start at the bottom: People who make lots of little bets on many potential unicorns have christened themselves angels. Angels? Really? You’ve plucked your self-serving moniker from the parables of a religion that specifically and explicitly had its head honcho throw the money men out of the temple and proclaim a rich man less likely to make it into heaven than a camel through a needle’s eye. Okay then!

“It is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God” — Matthew 19:23–26

And that’s just the first step of the pipeline. If you’re capable of stringing enough buzzwords about disruption and sufficient admiration for its holy verses, like software eating the world, and an appropriate yearning for the San Franciscan Mecca, you too can get to advance in this multi-level investment scheme.

Angels are merely the entry level in the holy trinity of startup money. Proceed along the illuminated path and you’ll quickly be granted an audience with the wise venture capitalists. And finally, if your hockey stick is strong, you’ll get to audition in front of the investment bankers who will weigh your ability to look shiny just long enough until the lock-up period on insiders selling shares is up.

And guess what these people call that final affirmation: A LIQUIDITY EVENT. The baptizing required to enter financial heaven. Subtle, isn’t it? Oh, and then, once you’ve Made It™, you get to be reborn an angel and the circle of divinity is complete. Hale-fucking-lujah!

You might think, dude, what do I care? I AM SPECIAL. I’m going to beat all the odds of the unicorn sausage factory and come out with my special horn. And who gives a shit about the evangelical vocabulary of financiers anyway? As long as they show me the money, I’ll call them Big Dollar Daddy if they want to. No skin off my back!

So first you take a lot of money from angels desperate to not miss out on the next big unicorn. Then you take an obscene amount of money from VCs to inflate your top-line growth, to entice the investment bankers that you might be worthy of foisting upon the public markets, eventually, or a suitable tech behemoth.

And at every step along this scripted way, you accumulate more bosses. More people with “guidance” for you, about how you can juice the numbers long enough to make it someone else’s problem to keep the air castle in the sky inflated and rising. But of course it isn’t just guidance, once you take the money. It’s a debt owed, with all the nagging reciprocity that comes with it.

Now, if you truly want to become the next fifty-billion dollar Uber in another five years, I guess this game somehow makes sense in its own twisted logic. But it’s more than worth a few moments of your time to reconsider whether that’s really what you want. Or, even more accurately, whether an incredibly unlikely shot at that is what you want.

Don’t just accept this definition of “success” because that’s what everyone is cheering for at the moment. Yes, the chorus is loud, and that’s seductively alluring, but you don’t have to peel much lacquer off the surface to see that wood beneath might not be as strong as you’d imagine.

Let’s take a step back and examine how narrow this notion of success is.

First, ponder the question: Why are you here?

“Get Your Ticket To Join The World’s Largest Companies and Most Exciting Startups: It’s not just startups that come to Web Summit. Senior executives from the world’s leading companies will be joining to find out what the future holds and to meet the startups that are changing their industries.” — Web Summit invitation.

That’s one reason: You think you’d like to be mentioned in that headline: The World’s Largest Companies and Most Exciting Startups. In other words, you too would really like to try that unicorn horn out for size. And white, white, is totally your color. It’s meant to be.

Well, to then answer the question, “why are you here?”, you might as well make it literal. Why are you HERE. Dublin, Ireland, The European Union? Don’t you know that surely the fastest and probably the only way to join the uniclub is to rent a mattress in the shifty part of San Francisco where the rent is only $4,000/month?

Because while that area north of Silicon Valley is busy disrupting everything, it still hasn’t caught up with the basic disruption of geography. So if your angel or VC can’t drop by your overpriced office for a jam session, well, then you’re no good at all, are you?

The real question is why do you startup? I don’t actually believe that most people are solely motivated by fawning over the latest hockey stick phenomenon. Bedazzled, probably, but not solely motivated. I invite you to dig deeper and explore those motivations. As inspiration, here were some of mine when I got involved with Basecamp:

I wanted to work for myself. Walk to my own beat. Chart my own path. Call it like I saw it, and not worry about what dudes in suits thought of that. All the cliches of independence that sound so quaint until you have a board meeting questioning why you aren’t raising more, burning faster, and growing at supersonic speeds yesterday?!

Independence isn’t missed until it’s gone. And when it’s gone, in the sense of having money masters dictate YOUR INCREDIBLE JOURNEY, it’s gone in the vast majority of cases. Once the train is going choo-choo there’s no stopping, no getting off, until you either crash into the mountain side or reach the IPO station at lake liquidity.

I wanted to make a product and sell it directly to people who’d care about its quality. There’s an incredible connection possible when you align your financial motivations with the service of your users. It’s an entirely different category of work than if you’re simply trying to capture eyeballs and sell their attention, privacy, and dignity in bulk to the highest bidder.

I’m going to pull out another trite saying here: It feels like honest work. Simple, honest work. I make a good product, you pay me good money for it. We don’t even need big words like monetization strategy to describe that transaction because it is so plain and simple even my three year-old son can understand it.

I wanted to put down roots. Long term bonds with coworkers and customers and the product. Impossible to steer and guide with a VC timebomb ticking that can only be defused by a 10–100x return. The most satisfying working relationships I’ve enjoyed in my close to two decades work in the internet business have been those that lasted the longest.

We have customers of Basecamp that have been paying us for more than 11 years! I’ve worked with Jason Fried for 14, and a growing group of Basecamp employees for close to a decade.

I keep seeing obituaries of this kind of longevity: The modern work place owes you nothing! All relationships are just fleeting and temporary. There’s prestige in jumping around as much as possible. And I think, really? I don’t recognize that, I don’t accept that, there’s no natural law making this inevitable.

I wanted the best odds I could possibly get at attaining the tipping point of financial stability. In the abstract, economic sense, a 30% chance of making $3M is as good as a 3% chance of making $30M is as good as a 0.3% chance at making $300M. But in the concrete sense, you generally have to make your pick: Which coupon is the one for you?

The strategies employed to pursue the 30% for $3M are often in direct opposition to the strategies needed for a 0.3% shot at making $300M. Shooting for the stars and landing on the moon is not how Monday morning turns out.

I wanted a life beyond work. Hobbies, family, and intellectual stimulation and pursuits beyond Hacker News, what the next-next-next JavaScript framework looks like, and how we can optimize our signup funnel.

I wanted to embrace the constraints of a roughly 40-hour work week and feel good about it once it was over. Not constantly thinking I owed someone more of my precious twenties and thirties. I only get those decades once, shit if I’m going to sell them to someone for a bigger buck a later day.

These motives, for me, meant rejecting the definition of success proposed by the San Franciscan economic model of Get Big or GTFO. For us, at Basecamp, it meant starting up Basecamp as a side business. Patiently waiting over a year until it could pay our modest salaries before going full time on the venture. It meant slowly growing an audience, rather than attempting to buy it, in order to have someone to sell to.

By prevailing startup mythology, that meant we probably weren’t even ever really a startup! There were no plans for world domination, complete capture of market and customers. Certainly, there were none of the traditional milestones to celebrate. No series A funding. No IPO plans. No acquisitions.

Our definition of winning didn’t even include establishing that hallowed sanctity of the natural monopoly! We didn’t win by eradicating the competition. By sabotaging their rides, poaching their employees, or spending the most money in the shortest amount of time… We prospered in an AND world, not an OR world. We could succeed AND others could succeed.

All this may sound soft, like we have a lack of aspiration. I like to call it modest. Realistic. Achievable. It’s a designed experience and a deliberate pursuit that recognizes the extremely diminishing returns of life, love, and meaning beyond a certain level of financial success. In fact, not only diminishing, but negative returns for a lot of people.

I’ve talked to more than my fair share of entrepreneurs who won according to the traditional measures of success in the standard startup rule book. And the more we talked, the more we all realized that the trappings of a blow-out success weren’t nearly as high up the Maslowian pyramid of priorities as these other, more ephemeral, harder-to-quantify motivational gauges.

I guess one way of putting what I’m trying to say is this: There’s a vast conspiracy in the world of startups! (Yes, get your tinfoil hats out because Kansas is about to go bye-bye). People act in their own best interest! Especially those whose primary contribution is the capital they put forth. They will rationalize that pursuit as “the good of the community” without a shred of irony or introspection. Not even the most cartoonish, evil tycoon will think of themselves as anything but “doing what’s best”.

And every now and again, this self-interest shows itself in surprisingly revealing ways. Like when you hear angels brag about how YOU CANNOT KNOW WHICH BUSINESS IS GOING TO BE THE NEXT UNICORN. Thus, the rational play is to play as much as you possibly can. I find that a stunning acceptance of their own limited input in the process. Hey, shit, I don’t know which mud is going to stick to the wall, so please, for the sake of my six-pack of Rolexes, keep throwing!!

This whole conference is utterly unrepresentative when it comes to the business world at large! That’s why the mindfuck is so complete. You have a tiny minority of capital providers, their hang-arounds, and the client companies all vested in perpetuating a myth that you need them! That going into the cold, unknown world of business without their money in your mattress is a fool’s errand.

Don’t listen! They’ve convinced the world that San Francisco is its primary hope for progress and that while you should emulate it where you can, that emulation is going to be a shallow one. Best you send your hungry and your not-so-poor to our shores so we can give them a real shot at glory and world domination.

They’ve trained the media like obedient puppies to celebrate their process and worship their vocabulary. Oh, Series A! Cap tables! Vesting cliffs!

But in the end, they’re money lenders.

Morality pitted against the compound leverage of capital is often outmatched. Greed is a powerful motivator in itself but it gets accelerated when you’re serving that of others. Privacy for sale? No problem! Treating contractors like a repugnant automaton class of secondary citizens to which the company need show no allegiance? PAR FOR THE COURSE.

Disrupt-o-mania fits the goals of this cabal perfectly. It’s a license to kill. Run fast and break societies.

Not all evil, naturally, but sucking a completely disproportionate amount of attention and light from the startup universe.

The distortion is exacerbated by the fact that people building profitable companies outside the sphere of the VC dominion have little systemic need to tell their story. VCs, on the other hand, need the continuous PR campaign to meet their recruiting goals. They can’t just bag a single win and be content henceforth.

The presentation of unicorns is as real as the face of a model on a magazine cover. Retouched to the nth degree, ever so carefully arranged, labored over for hours.

The web is the greatest entrepreneurial platform ever invented. Lowest barriers of entry, greatest human reach ever. I love the web. Permission-less, grand reach, diversity of implementation. Don’t believe this imaginary wall of access of money. It isn’t there.

Examine and interrogate your motivations, reject the money if you dare, and startup something useful. A dent in the universe is plenty.

Curb your ambition.

Live happily ever after.

See what we’re up to at Basecamp after twelve years with the brand-new version 3 we just launched. Also, if you enjoyed RECONSIDER, you’ll probably like my books REWORK and REMOTE as well.

From: https://m.signalvnoise.com/reconsider-41adf356857f

Power to the People: How One Unknown Group of Researchers Holds the Key to Using AI to Solve Real Human Problems

medium.comThursday 30 June 2016Greg Borenstein14 minute read
In the last few years, a series of spectacular research results have drawn the world’s attention to the field of machine learning. Excitement for AI hasn’t been this white-hot since the onset of the last AI Winter.

In the last few years, a series of spectacular research results have drawn the world’s attention to the field of machine learning. Excitement for AI hasn’t been this white-hot since the onset of the last AI Winter. But, despite the explosion of interest, most people are paying attention to the wrong research. And, in the process, they’re missing the work of a small set of researchers who are quietly building the foundation we’ll need to use machine learning to actually solve real human problems.

The current wave of AI excitement started with Hinton et al’s breakthrough success with deep convolutional neural networks on image classification. In a field that typically progresses by single percentage points, their results destroyed the previous state of the art. Hinton’s compatriots such as Yoshua Bengio, Yann LeCun, Andrew Ng and others quickly followed, using related techniques to set new benchmarks in speech recognition, face recognition, and a number of other research problems. The world of machine learning researchers rapidly became first aware of (and then profoundly obsessed by) this new suite of approaches, which was gathered together under the banner of Deep Learning.

And then, as Deep Learning gained more support from big companies like Google and Facebook, it started to produce achievements that were legible — and extremely impressive — to the wider public. AlphaGo won historic victories against the world’s leading Go players. IBM Watson dominated human players at Jeopardy on network TV. Smaller efforts like Neural Style Transfer and Deep Dream produced impressive visual memes that spread across social media.

All this success kindled a continuously burning flame of press attention and speculation that has drawn towards it executives, front-line technologists, and designers across a wide range of businesses. Venture capitalists are starting to talk about investing in an AI First World. Half of startups want to use these AI advances to build conversational UIs for their web and mobile apps and the other half want to use them to improve their Internet of Things products. I recently spoke at a conference put on by The Economist in Hong Kong and one of the major questions was about AI’s impact on marketing.

But now for a splash of cold water: while AI systems have made rapid progress, they are nowhere near being able to autonomously solve any substantive human problem. What they have become is powerful tools that could lead to radically better technology if, and only if, we successfully harness them for human use.

What’s stopping AI from being put to productive use in thousands of businesses around the world isn’t some new learning algorithm. It’s not the need for more programmers fluent in the mathematics of stochastic gradient descent and back propagation. It’s not even the need for more accessible software libraries. What’s needed for AI’s wide adoption is an understanding of how to build interfaces that put the power of these systems in the hands of their human users. What’s needed is a new hybrid design discipline, one whose practitioners understand AI systems well enough to know what affordances they offer for interaction and understand humans well enough to know how they might use, misuse, and abuse these affordances.

Look at history. It wasn’t some advance in cutting edge math or programming technique that produced the “killer app” for the personal computer. It was Dan Bricklin’s connection between the possibilities of programming and the working methods of real people that produced VisiCalc, the first “electronic spreadsheet”.

“I thought, if only we had a blackboard where I could erase a number and write a new number in, and everything would recalculate.” — Dan Bricklin, bored Harvard Business School Student

And hidden beneath the spectacle of Deep Learning’s much ballyhooed success, an entire field of research has quietly grown up that’s dedicated to exactly this problem of designing human interactions with machine learning systems. Interactive Machine Learning, as this small but exciting field is known, lives at the intersection of User Experience and Machine Learning research. And almost everyone reading this — almost anyone wondering how to incorporate AI into their own business or creative tool or software product or design practice — would be better off studying this field than maybe any other part of the AI landscape.

As Recurrent Neural Nets surpass Convolutional Neural Nets only to be outpaced by Deep Reinforcement Learning which in turn is edged out by the inevitable Next Thing in this incredibly fast moving field, the specifics of any given algorithm that temporarily holds the title of best performance on some metric or benchmark will fade in importance. What will stay important are the principles for designing systems that let humans use these learning systems to do things they care about.

Those principles are exactly the subject of Interactive Machine Learning. And if you’re a designer or a manager or a programmer working to use AI to make something for human use they’re the principles you’ll have to master.

To help get you started, I thought I’d summarize a few of the field’s results and provide links to some of its most interesting research. A couple of years ago, I was lucky enough to take an MIT Media Lab course on Interactive Machine Learning that was taught by Brad Knox, one of the most interesting practitioners in the field. Nearly all of what I’m going to describe here I learned from Knox or by studying the reading he assigned. (In fact, what follows is primarily a layperson’s summary of Knox’s paper, Power to the People: The Role of Humans in Interactive Machine Learning, written with Saleema Amershi, Maya Cakmak, and Todd Kulesza — all amongst IML’s leading lights.)

One additional note: unlike the walls of equations that make up most machine learning papers, the IML literature is profoundly inviting and largely friendly to non-experts. I encourage you to dive into the original papers wherever a particular topic piques your interest. I’ve gathered links to all of the papers in Knox’s syllabus here to make doing so especially convenient.

Use Active Learning to Get the Most Help from Humans

The core job of most machine learning systems is to generalize from sample data created by humans. The learning process starts with humans creating a bunch of labeled data: images annotated with the objects they depict, pictures of faces with the names of the people, speech recordings with an accurate transcript, etc. Then comes training. A machine learning algorithm processes all that human-labeled data. At the end of training the learning algorithm produces a classifier, essentially a small standalone program that can provide the right answer for new input that was not part of the human-labeled training data. That classifier is what you then deploy into the world to guess your users’ age, recognize their friends’ faces, or transcribe their speech when they talk to their phone.
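For readers who think in code, here is a minimal sketch of that label, train, classify loop using scikit-learn. The four review snippets and their sentiment labels are invented purely for illustration; any human-labeled dataset would slot into the same three steps.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # 1. Humans create labeled data (toy examples, invented for illustration).
    texts = ["great product, works perfectly", "broke after two days",
             "exactly what I needed", "waste of money"]
    labels = ["positive", "negative", "positive", "negative"]

    # 2. Training: the learning algorithm generalizes from the labeled examples.
    classifier = make_pipeline(CountVectorizer(), LogisticRegression())
    classifier.fit(texts, labels)

    # 3. The resulting classifier answers for input it has never seen.
    print(classifier.predict(["broke within the first week"]))  # likely ['negative']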

The scarce resource in this equation is the human labor needed to label the training data in the first place.

Many impressive Deep Learning results come from domains where enormous amounts of labeled data are available because they were shared by a social network’s billion users or crawled from across the web. However, unless you’re Facebook or Google, you’ll likely find labeled data relevant to your problem somewhat more scarce, especially if you’re working in a new vertical that has its own jargon, behavior, or data sources. Hence you’ll need to get your labels from your users. This entails building some kind of interface that shows them examples of the texts or images or other inputs you want to be able to classify and gets them to submit the correct labels.

But, again, human labor — particularly when it’s coming from your users — is a scarce resource. So, you’ll want to only ask your users to label the data that will improve your system’s results the most. Active Learning is the name for the field of machine learning that studies exactly this problem: how to find the samples for which a human label would help the system improve the most. Researchers have found a number of algorithmic approaches to this problem. These include techniques for finding the sample about which the system has the greatest uncertainty, detecting samples for which a label would cause the greatest change to the system’s results, selecting samples for which the system expects that its predictions would have the highest error, and others. Burr Settles’ excellent survey of Active Learning provides a great introduction to the field.
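As a rough illustration of one of those strategies, the sketch below implements simple uncertainty sampling with scikit-learn: fit a model on whatever labels the humans have provided so far, then ask them about the unlabeled example whose prediction is closest to a coin flip. The data is synthetic and the loop runs only one query, just to show the shape of the idea.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_labeled = rng.normal(size=(20, 2))            # samples a human has already labeled
    y_labeled = (X_labeled[:, 0] > 0).astype(int)   # their (synthetic) labels
    X_pool = rng.normal(size=(200, 2))              # the unlabeled pool

    clf = LogisticRegression().fit(X_labeled, y_labeled)

    # Uncertainty sampling: pick the pool example whose predicted probability
    # is closest to 50/50 and ask the human to label that one next.
    probs = clf.predict_proba(X_pool)[:, 1]
    uncertainty = 1 - 2 * np.abs(probs - 0.5)
    query_index = int(np.argmax(uncertainty))
    print("Ask the user to label pool example", query_index)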

As a concrete example of these ideas, here’s a video demonstrating a hand gesture recognition system I built that uses Active Learning principles to request labels from the user when it sees a gesture for which it cannot make a clear prediction (details about this work here).

Don’t Treat the User as an “Oracle”

Active Learning researchers have shown success in producing higher-accuracy classifiers with fewer labeled samples. Active Learning is a great way to pull the most learning out of the labeling work you get your users to do.

However, from an interaction design perspective, Active Learning has a major downside: it puts the learning system in charge of the interaction rather than the human user. Active Learning researchers refer to the human who labels the samples they select as an “oracle”. Well, Interactive Machine Learning researchers have shown that humans don’t like being treated as an oracle.

Humans don’t like being told what to do by a robot. They enjoy interactions much more, and are willing to spend more time training the robot if they are in charge of the interaction.

In a 2010 paper, Designing Interactions for Robot Active Learners, Cakmak et al studied user perceptions of passive and active approaches to teaching a robot to recognize shapes. One option put the robot in charge. It would use Active Learning to determine the shape it wanted labeled next. Then it would point at the shape and the user would provide the answer. The other option put the users in charge, letting them select which examples to show the robot.

When the robot was in charge of the interaction, selecting which sample it wanted labeled in the Active Learning style, users found the robot’s stream of questions “imbalanced and annoying”. Users also reported a worse understanding of the state of the robot’s learning, which made them worse teachers.

In a software context, Guillory and Bilmes found similar reactions when attempting to apply active learning to Netflix’s movie rating interface.

Choose Algorithms for Their Ability to Explain Classification Results

Imagine you have a persistent health problem that you need diagnosed. You have the choice of two AI systems you can use. System A has a 90% accuracy rate, the best available. It takes in your medical history, all your scans and other data, and gives back a diagnosis. You can’t ask it any questions or find out how it arrived at that diagnosis. You just get back the Latin name for your condition and a Wikipedia link. System B has an 85% accuracy rate, substantially less than System A. System B takes all your medical data and also comes back with a diagnosis. But unlike System A, it also tells you how it arrived at that diagnosis: your blood pressure is past a certain threshold, you’re above a certain age, you have three of five factors from your family history, etc.

Which of these two systems would you choose?

There’s a cliche from marketing that half of the advertising budget is wasted but no one knows which half. Machine learning researchers have a related one: it’s easy to create a system that is right 80% of the time; the hard part is figuring out which 80% is right. Users trust learning systems more when they can understand how they arrive at their decisions. And they are better able to correct and improve these systems when they can see the internals of their operation.

So, if we want to build systems that users trust and that we can rapidly improve, we should select algorithms not just for how often they produce the right answer, but for what hooks they provide for explaining their inner workings.

Some machine learning algorithms provide more of these types of affordances than others. For example, the neural networks currently pushing the state of the art in accuracy on so many problems provide particularly few hooks for such explanations. They are basically big black boxes that spit out an answer (though some researchers are working on this problem). On the other hand, Random Decision Forests provide incredibly rich affordances for explaining classifications and for building interactive controls over learning systems. You can figure out which variables were most important, the system’s confidence about each prediction, the proximity between any two samples, etc.
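To make those affordances concrete, here is a brief sketch using scikit-learn’s RandomForestClassifier on an invented, toy “diagnosis” dataset. The feature names are made up, but the two hooks shown, feature_importances_ and predict_proba, are exactly the kind of explanatory handles described above.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    feature_names = ["blood_pressure", "age", "family_history_factors"]
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "diagnosis" labels

    forest = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

    # Which variables mattered most to the model overall?
    for name, importance in zip(feature_names, forest.feature_importances_):
        print(f"{name}: {importance:.2f}")

    # How confident is the model about one particular prediction?
    patient = X[:1]
    print("P(condition) =", forest.predict_proba(patient)[0, 1])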

You wouldn’t select a database or web server or JavaScript framework simply because of its performance benchmarks. You’d look at the API and see how well it supported the interface you want to provide your users. Similarly, as designers of machine learning systems we should expect to have access to the internal state of our classifiers in order to build richer, more interactive interfaces for our users.

Beyond our own design work on these systems, we want to empower our users themselves to improve and control the results they receive. Todd Kulesza, at Microsoft Research, has done extensive work on exactly this problem which he calls Explanatory Debugging. Kulesza’s work produces machine learning systems that explain their classification results. These explanations themselves then act as an interface through which users can provide feedback to improve and, importantly, personalize the results. His paper on Why-Oriented End-User Debugging of Naive Bayes Text Classification provides a powerful and concrete example of the idea.
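The sketch below is in the spirit of that idea rather than a reproduction of Kulesza’s system: a Naive Bayes text classifier whose per-word evidence is surfaced so a user can see which words pushed a message toward one label or the other, and could then correct any that mislead. The tiny “work vs. spam” corpus is invented.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = ["meeting moved to friday", "cheap pills buy now",
             "project status meeting", "win money now click here"]
    labels = np.array([0, 1, 0, 1])                  # 0 = work, 1 = spam

    vec = CountVectorizer()
    X = vec.fit_transform(texts)
    nb = MultinomialNB().fit(X, labels)

    message = "buy pills before the meeting"
    x = vec.transform([message])
    print("predicted:", "spam" if nb.predict(x)[0] == 1 else "work")

    # Explanation: for each word in the message, how strongly does it
    # favor spam (positive values) or work (negative values)?
    contrib = nb.feature_log_prob_[1] - nb.feature_log_prob_[0]
    words = vec.get_feature_names_out()
    present = x.toarray()[0] > 0
    for word, c in sorted(zip(words[present], contrib[present]),
                          key=lambda pair: -abs(pair[1])):
        print(f"{word:>10}: {c:+.2f}")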

Empowering Users to Create Their Own Classifiers

In conventional machine learning practice, engineers build classifiers, designers integrate them into interfaces, and then users interact with their results. The problem with this pattern is that it divorces the practice of machine learning from knowledge about the problem domain and the ability to evaluate the system’s results. Machine learning engineers or data scientists may understand the available algorithms and the statistical tests used to evaluate their results, but they don’t truly understand the input data and they can’t see problems in the results that would be obvious to their users.

At best this pattern results in an extremely slow iteration cycle. Machine learning engineers return to their users with each iteration of the system, slowly learning about the domain and making incremental improvements. In practice, this cumbersome cycle means that machine learning systems ship with problems that are obvious to end users or are simply too expensive to build for many real problems.

To escape this pattern we have to put the power to create classifiers directly in the hands of users. Now, no user wants to “create a classifier”. So, in order to give them this power we need to design interfaces that let them label samples, select features, and do all the other actions involved in a way that fits with their existing mental models and workflows.

When we figure out how to do this the results can be extremely powerful.

One of the most impressive experiments I’ve seen in Interactive Machine Learning is Saleema Amershi’s work on Facebook group invites, ReGroup: Interactive Machine Learning for On-Demand Group Creation in Social Networks.

The current Facebook event invite experience goes like this: you create a new event and go to invite friends. Facebook presents you with an alphabetical list of all of your hundreds of friends with a checkbox by each one. You look at this list in despair and then click the box to “invite all”. And hundreds of your friends get invites to events they’ll never be able to attend in a city where they don’t live.

The ReGroup system Amershi and her team put together improves on this dramatically. It starts you with the same list of names with checkboxes. But when you check a name, it treats that check as a positively labeled sample, and it treats any names you skipped as negatively labeled samples. It uses this data to train a classifier, treating profile data and social connections as the features. It then computes a likelihood for each of your friends that you’ll check the box next to them and sorts the most likely ones to the top. Because the features that determine event relevance are relatively strong and simple (where people live, what social connections you have in common, how long ago you friended them, etc.), the classifier’s results rapidly become useful.
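A drastically simplified sketch of that loop (not ReGroup’s actual code): checked names become positive examples, skipped names become negative ones, and a small logistic regression re-ranks everyone else. The friend names and the two features, same_city and mutual_friends, are invented stand-ins for the profile and connection features the real system uses.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    friends = ["ana", "bo", "cy", "dee", "eli", "fay"]
    #                     same_city, mutual_friends  (invented features)
    features = np.array([[1, 12],
                         [1,  8],
                         [0,  1],
                         [1, 15],
                         [0,  0],
                         [1,  3]], dtype=float)

    checked = [0]        # the user ticked "ana": positive examples
    skipped = [2, 4]     # scrolled past "cy" and "eli": negative examples

    X_train = features[checked + skipped]
    y_train = np.array([1] * len(checked) + [0] * len(skipped))
    model = LogisticRegression().fit(X_train, y_train)

    # Re-rank the friends the user hasn't acted on yet.
    remaining = [i for i in range(len(friends)) if i not in checked + skipped]
    scores = model.predict_proba(features[remaining])[:, 1]
    for i, score in sorted(zip(remaining, scores), key=lambda pair: -pair[1]):
        print(f"{friends[i]}: {score:.2f}")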

This work is an incredibly elegant match between existing user interaction patterns and what’s needed to train a classifier.

Another great example is CueFlik, a project by Fogarty et al that improves web-based image search by letting users create rules that automatically group photos by their visual qualities. For example, a user might search for “stereo” and then select just the “product photos” (those on a clean white background). CueFlik takes these examples and learns a classifier that distinguishes product photos from natural photos, which users can later choose to apply to searches beyond the initial one for “stereo”, for example to “cars” or “phones”.

Conclusion

When imagining a future shaped by AI, it’s easy to fall back on cultural tropes from sci-fi movies and literature, to think of The Terminator or 2001 or Her. But these visions reflect our anxieties about technology, gender, or the nature of humanity far more than the concrete realities of machine learning systems as we’re actually building them.

Instead of seeing Deep Learning’s revolutionary recent results as incremental steps towards these always receding sci-fi fantasies, imagine them as the powerful new engines of a thousand projects like ReGroup and CueFlik, projects that give us unprecedented abilities to understand and control our world. Machine learning has the potential to be a powerful tool for human empowerment, touching everything from how we shop to how we diagnose disease to how we communicate. To build these next thousand projects in a way that capitalizes on this potential we need to learn not just how to teach the machines to learn but how to put the results of that learning into the hands of people.

From: https://medium.com/@atduskgreg/power-to-the-people-how-one-unknown-group-of-researchers-holds-the-key-to-using-ai-to-solve-real-cc9e75b1f334
