Paul Buchheit: Can you see the wolves in your organization? - abstractbill
http://paulbuchheit.blogspot.com/2007/06/great-story-from-steve-yegge.html
======
Tichy
I didn't understand the story. So essentially, if too many Marshmallows start
showing up, something bad is going to happen? Seems to me I have read more
profound things on news.YC before.
~~~
budu3
I think the landowner represents the company founders. The wolves are the
suits who come in later. The marshmallows are things that creep into the
organization when it starts growing, like corporate BS, politics, poor internal
company tools, etc. The storyteller is someone who just left such an
organization and is attracting a cult following of young, fresh-faced
hackers/entrepreneurs/engineers/workers, etc.
~~~
ajju
""I don't know what I'll do next, but I suspect it will involve teaching some
sheep a few basic fighting maneuvers, and also a fair amount of long-overdue
repair to our floating platform and our in-progress mansion."
I don't think Rauser is leaving the organization.
~~~
budu3
You could be right about him not leaving the organization. But I fail to
understand why he would stay in the organization since the wolves are still
there, he didn't say he found a way to fend them off, and they're still eating
his sheep.
~~~
paul
Wolves are a natural part of the ecosystem.
------
ralph
If you think the wolves are bad, look out for those that give them counsel.
<http://www.askoxford.com/firstnames/ralph?view=uk> :-)
------
budu3
So the wolves are those who play office politics?
------
sbraford
I'm guessing from his note at the end that the story was originally told by
one of his friends at MS?
~~~
ajju
No. How can someone at MS and someone at Google have the same wolf? It's
Google Kirkland I suspect.
~~~
sbraford
The same metaphorical wolf.
Not the exact. same. wolf.
------
JMiao
Emperor Palpatine.
~~~
JMiao
Just think about this. Sorry for the person who downmodded me, but I'm
serious.
Static single assignment for functional programmers (2011) - smcgivern
https://wingolog.org/archives/2011/07/12/static-single-assignment-for-functional-programmers
======
FullyFunctional
This is very well written. I want to mention two other ILs that take SSA even
closer to a functional representation: thinned gated single-assignment (TGSA)
form and the Value Dependence Graph.
I've used the former with pleasure, as a lot of traditional passes become
trivial by construction: global common subexpression elimination, copy
propagation, dead code elimination, and constant propagation. With very little
extra work you also get strength reduction, expression hoisting, and a host of
other things that I'm forgetting.
I never understood why this isn't more popular, as old-school SSA, as used
even by LLVM, is so terribly clunky and inefficient.
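(To make the SSA-to-functional correspondence the article describes concrete, here is a tiny, purely illustrative Python sketch; the loop and the pseudo-IR in the comments are made up for this comment, not taken from the article or from either IL mentioned above.)

    # Each basic block becomes a function, the phi nodes at the loop header
    # become its parameters, and the back edge becomes a call that passes in
    # the freshly computed values.
    #
    #   ; sum 0..9 in SSA form (pseudo-IR)
    #   header:  i1   = phi [0, entry], [i2, body]
    #            acc1 = phi [0, entry], [acc2, body]
    #            br (i1 < 10) ? body : exit
    #   body:    acc2 = acc1 + i1
    #            i2   = i1 + 1
    #            br header
    #   exit:    ret acc1

    def header(i1, acc1):          # phi nodes -> parameters
        if i1 < 10:
            return body(i1, acc1)
        return acc1                # exit block

    def body(i1, acc1):
        acc2 = acc1 + i1           # every SSA name is assigned exactly once
        i2 = i1 + 1
        return header(i2, acc2)    # back edge -> call with the new values

    print(header(0, 0))            # -> 45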
------
mdaniel
I always love reading his blog posts. I honestly don't know how one finds the
time to write that well at that length and still turn out guile et al., plus
whatever his "day job" is.
Can Line Arrays Form Cylindrical Waves? A Line Array Theory Q&A (2005) - brudgers
https://web.archive.org/web/20080925234554/http://www.meyersound.com/support/papers/line_array_theory.htm
======
slededit
Line arrays have revolutionized the quality of large PA systems, although I do
miss the aesthetics of walls of woofers haphazardly placed, and folded horn
subwoofers. Line arrays look much more elegant and don't give that same Rock &
Roll appearance.
------
seabrookmx
Neat!
I used to work on this[1] program that allows you to visualize and model SPL
(volume) and frequency patterns with a given line-array setup. Unfortunately
it doesn't look like they released the Mac or Linux versions despite it being
a portable Qt app, but if you're a Windows user it's pretty fun to play with.
[1]:
[http://eaw.com/portfolio_page/resolution/](http://eaw.com/portfolio_page/resolution/)
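(For a sense of what such a tool computes, here is a rough, purely illustrative Python sketch, not the Resolution software: it models an array as a stack of coherent point sources and prints how the on-axis level falls off with distance. The driver count, spacing, and frequency are made-up assumptions.)

    import numpy as np

    # Illustrative only: sum N coherent point sources on a vertical line and
    # see how on-axis SPL falls with distance. Near a long array the level
    # drops roughly 3 dB per doubling of distance (cylindrical-like wave);
    # far away it reverts to roughly 6 dB per doubling (spherical wave).
    c = 343.0                     # speed of sound, m/s
    f = 500.0                     # probe frequency, Hz (assumed)
    k = 2 * np.pi * f / c
    n_drivers = 24                # assumed
    spacing = 0.2                 # metres between drivers (assumed)
    ys = (np.arange(n_drivers) - (n_drivers - 1) / 2) * spacing

    def level_db(distance):
        r = np.sqrt(distance ** 2 + ys ** 2)   # distance to each driver
        p = np.sum(np.exp(1j * k * r) / r)     # coherent pressure sum
        return 20 * np.log10(abs(p))

    for d in (2, 4, 8, 16, 32, 64):
        print(f"{d:3d} m: {level_db(d) - level_db(2):6.1f} dB relative to 2 m")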
~~~
Anechoic
Do you work at EAW now?
~~~
seabrookmx
Nope. I did work for them (LOUD Technologies Inc., which is the parent company)
back in 2013-ish. I'm still in contact with a few people there though and the
same team still works on the Resolution software IIRC.
LOUD also owns Mackie, Ampeg, and Martin Audio.
------
kposehn
Interesting! There have been a few advances in line arrays, like D.B. Keele Jr's
CBT arrays (continuous beam width transducers):
[http://www.xlrtechs.com/dbkeele.com/CBT.php](http://www.xlrtechs.com/dbkeele.com/CBT.php)
Co-Op cashier's breasts overcharged for fruit and veg - petercooper
http://www.theregister.co.uk/2010/11/09/jersey_coop/
======
reinhardt
Pics or it didn't happen ;)
Coca-Cola paid scientists to downplay how sweet drinks fueled the obesity crisis - alkhidr
https://www.dailymail.co.uk/health/article-8589497/Coca-Colas-work-scientists-low-point-history-public-health.html
======
Spooky23
Coca Cola has one of the most incredible branding and marketing operations on
earth.
I’d suggest doing the tourist trap in Atlanta if you find yourself there.
You’ll learn that if you have ever experienced a moment of happiness, it was
due to Coke. The idea that the company would find a way to convince people
that guzzling corn syrup was not that bad is completely unsurprising.
~~~
zachrose
Alternative perspective: World of Coke sucks. It’s one of those one-way guided
museums like the creationist museum in Kentucky, and contains nothing you
couldn’t get out of a Wikipedia article or two.
------
lma21
Apologies if this sounds like an ignorant question. Why isn't this common
knowledge? How hard is it to convince people that sugar is bad?
~~~
smileypete
Refined sugar is fine when used in part to fuel physical activity. Though it's
a complete disaster when consumed in excess when sedentary.
140 calories for a can of Coke is pretty modest; I just ate a third of a lemon
meringue containing 500 calories - most supermarket food is calorific muck,
cheap to manufacture yet tasty enough.
~~~
Nomentatus
This is absolutely contrary to the evidence. Sucrose is rare in plants in
nature (save in tiny quantities). We aren't evolved to absorb it gradually; it
just floods into the bloodstream, and it behaves very differently from either of
its components, or fructose or other vegetable and fruit sugars. It's so
evolutionarily weird that even bacteria and fungi generally aren't able to eat
it. Therefore, if you're on a FODMAP diet you are allowed to drink Mexican
Coca-Cola (has sucrose) but not American Coca-Cola (has liquid invert sugar.)
Sucrose won't cause SIBO yeast and bacteria growth because yeast and bacteria
haven't evolved to cope with it either.
Tests on athletes (some now decades old) show even moderate sucrose
consumption reduces athletic performance and health.
------
gazzini
I wonder if stuff like this has an outsized effect on the anti-maskers, anti-
vaxxers, or the general “anti-science” crowd.
I know many of these people, and they’re not actually against science at all —
they just believe that the “science” being preached at them is simply
political fodder or, at the very least, is intentionally biased to fit a
narrative.
Ideally, they’d be able to mentally allow for different authorities /
scientific foundations to exist, and one bad apple shouldn’t ruin the whole
bunch. Unfortunately, “scientists == corrupt” is an easier (and more
provocative) conclusion when stuff like this happens.
~~~
tcbawo
Someone I know sent me a link to a 'peer-reviewed' paper 'proving' that
hydroxychloroquine cut the mortality rate 'in half' from the Henry Ford Health
System in Detroit. However, a cursory look at the data shows a large median
age discrepancy between the populations receiving treatment -- the no HCQ
group was significantly younger. The vast majority of people have outsourced
their opinions to 'trusted sources'. And a sizable fraction suffer from
massive confirmation bias.
------
bfabio
I find it easy to believe this actually happened, but why are we treating the
Daily Mail as a legitimate source?
~~~
atian
I like to think that our standards drop as productivity fizzles out. We have
the luxury of entertaining more without being penalized for it.
------
Feolkin
You know, I feel like there's a connection between companies paying off
scientists and growing anti-science sentiment. Am I wrong?
~~~
arvinsim
There used to be a weight behind statements that are scientific. Most people
would trust them without going into debates.
I agree that corporate lobbying and academic corruption have certainly eroded
that trust.
~~~
throwaway8941
When was that? I don't live in the US, but just the other day I finished 'The
Emperor of All Maladies' that someone recommended here. If I understood the
book correctly, the link between tobacco usage and lung cancer was established
pretty rapidly, but it took a very long time (and decisive political action
resulting in a massive propaganda campaign) to convince the public.
------
foxyv
The worst part is, once it does its damage, it takes years of hard work to
undo the lasting insulin resistance.
------
mytailorisrich
In fairness, if people drink this every day, as some do, it's hardly Coca
Cola's fault.
It's quite obvious that obesity is caused by a bad diet. People should watch
their diet. The rest is just politics: politicians don't want to blame people
and companies don't want to be used as scapegoats.
We also need to accept that the vast majority of people are neither victims
nor stupid. People do make choices knowing that they are not good for their
health. I think that at least some of the anti-expert trend we see is a
reaction against being told what to do on the grounds that people cannot decide
for themselves.
~~~
tiew9Vii
It's Coca-Cola's fault paying scientists to downplay how sweet drinks are
related to obesity.
It's Coca-Cola's fault not using morbidly obese people in their marketing
campaigns and instead choosing physically fit, attractive people, those who
realistically will not be drinking much Coca-Cola to get their gym bods
because of the sugar it has.
A 375ml can of coke is 161 calories / 40g of sugar. Two 375ml cans a day is
the equivalent of a small meal calorie wise with no nutritional benefit, only
80g of sugar.
People are essentially having multiple meals a day on top of their solid food
by drinking soft drinks and unknowingly or through ignorance are in massive
calorie surplus.
Anyone who goes to a gym and is in good shape knows it's 80% food, 20% time
working out. Your food is more important than the workout.
It's not in Coca-Cola's interest to educate people by advertising Coca-Cola as a
very poor liquid meal consisting of only sugar.
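(Rough back-of-the-envelope Python arithmetic on those figures, using the commonly cited ~3,500 kcal per pound of body fat as a rule of thumb, not a precise law:)

    cans_per_day = 2
    kcal_per_can = 161          # per 375 ml can, from the figure above
    sugar_per_can_g = 40

    daily_kcal = cans_per_day * kcal_per_can        # 322 kcal per day
    yearly_kcal = daily_kcal * 365                  # ~117,500 kcal per year
    daily_sugar_g = cans_per_day * sugar_per_can_g  # 80 g of sugar per day

    # If none of that surplus is burned off, ~3,500 kcal per pound of fat
    # (a rough rule of thumb) puts it around 33 lb a year.
    print(daily_kcal, yearly_kcal, round(yearly_kcal / 3500, 1), daily_sugar_g)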
~~~
mytailorisrich
People know that their diet is what makes them obese. People know how much
sugar there is in Coke and soft drinks.
It is frankly ridiculous to claim that in 2020 people are ignorant or misled.
~~~
ageitgey
There's a big difference between what you know as an abstract fact vs. what
seems normal in your society. By definition, normal people do what is normal
in their society.
I've lived in the southeastern US, various places in California, China and
Europe. The type of consumption that is considered normal is so vastly
different in all those places.
In the southeastern US, your whole environment is telling you that eating
enormous meals, drinking sugary drinks, etc, is totally normal. If you are
responsible, maybe you could cut back a little to "watch your health". But you
are starting from an unhealthy default setting and you have to dig yourself
out of a pit to get back to 'healthy'. This includes Atlanta, the home of Coke
(where I went to college and lived several years).
In southern California or even more so in Europe, the norms are vastly
different. Meals are much smaller and sugary drinks aren't assumed. Healthier
options are available nearly everywhere.
But in Europe, things go much further than even the healthiest parts of the
US. The very composition of basic foods is different - brand name foods are
less sugary and contain fewer preservatives than the same brands in the US,
staple foods like bread are less shelf-stable but contain fewer additives, and
so on. Things like Kraft Mac & Cheese, Snickers bars and Doritos chips are
literally illegal in Europe with their original US formulations and are
produced with tweaked ingredients. It's in the very DNA of society to default
to a more healthy lifestyle. Sure, it's possible to be unhealthy but the
default setting of your environment is much healthier so more people are
healthy.
The point is that people operate at a default setting that seems normal for
their society. Some will be healthy minded and will outperform the norm for
their area, but not everyone will. If you want a healthy population, you need
a healthy norm and no amount of telling people to take responsibility is going
to fix a society if literally every food easily available is bad for you.
There is just a mountain of public health evidence that shows this to be true.
Ask HN: How is your code/build/test/deploy flow organized? - tuyguntn
We use git for source code management, but we're thinking about which method of versioning is easier and more scalable in the long run. We can put a tag on each release, but when we need to fix an older version, say v0.1, we cannot easily check out, fix, then commit. Instead we are going to use a branch for each major version (v0.1, v1.0, v1.1, v2.0, etc.) and tags for minor versions inside each branch (v0.1.1, v0.1.2, ...).
We use Jenkins for builds, but how can we easily organize "build and deploy version X into production" (assuming that we do not have db migration overhead)?
Would be awesome if you could give some details about the pros and cons of your current flow, or maybe best practices for creating such a flow.
======
drakonka
We currently use a custom CI system. We produce dozens of builds per day in
various forms. A "build" to us has different definitions depending on who it
is targeted to. They include things like:
* Just building binaries/code
* A data build
* A build that content creators work with (built binaries synced to PC, matching data synced via P4 which they then build locally)
* A full files/package/disc build with included data and binaries
Coders and content creators have different branches to keep some isolation
between the two and avoid them impacting each other too much. New binaries are
built with every submission to source control as a verification step on the
coder branch, but most are only deployed once an hour. Some sets of binaries
are deployed at all times, those that are required to run our automated tests
on each CL.
The content creator branch always works with a pre-approved set of binaries.
When they submit new data it goes through a verification step where all data
is built and autotests run, then that data CL shows up as "good" for them - so
they know that data is safe for everyone to sync and work with.
We create full packages for QA to test a few times throughout the day.
The general build steps a CL on the code branch goes through are:
* Build code
* Build data
* Run autotests
* Once an hour deploy new binaries to sync
The general build steps a CL on the content branch goes through are:
* Build data (no need to build code here as we just sync the pre-approved binaries)
* Run autotests
* Deploy approved build (or mark a CL as "good")
That's the general gist of it, anyway.
------
brudgers
To me, one workflow - either branches or tags - would seem simpler and make
describing and tooling the process simpler. Again, to me, since git is
primarily about branching, I'd probably go with branching. Then again,
branches with little or no reason to exist beyond semantics wouldn't bother me
as much as it might bother someone else.
Good luck.
------
buildops
We use IncrediBuild - it speeds up our development and CI. C++ dev on Visual
Studio with some teams running C#, and CI into Jenkins. The C# team has more
automated unit tests with MSTest, which is where they shine.
Here's How Facebook Can Avoid Playing the Part of the Colonialist - mathattack
http://fortune.com/2016/02/11/facebook-colonialism/
======
nighthawk24
"Of course Facebook wasn’t marching in with gun in hand, seeking to subjugate
all who stood before it — it was only partnering with local mobile operators
to give people free access to certain online services. However, there are
parallels worth drawing."
Regarding the parallels, neither did the British come marching in with gun in
hand, seeking to subjugate all who stood before it. (British East India
Company)[https://en.wikipedia.org/w/index.php?title=British_East_Indi...](https://en.wikipedia.org/w/index.php?title=British_East_India_Company&redirect=no)
(Foothold in
India)[https://en.wikipedia.org/w/index.php?title=East_India_Compan...](https://en.wikipedia.org/w/index.php?title=East_India_Company&redirect=no#Foothold_in_India)
1960s Visionary Film “The Home of the Future: Year AD 1999” With Wink Martindale - DrScump
https://www.youtube.com/watch?v=0RRxqg4G-G4
======
DrScump
A retrospective interview with Wink Martindale:
[https://vimeo.com/15063245](https://vimeo.com/15063245)
Ask HN: Accepted job offer but no contract yet - throwaway4443
I applied to various companies and accepted an offer from one of them. It's been one week and there is no contract in sight. The recruiter keeps saying "tomorrow". This company is located in Germany.
I excused myself from most of the other hiring processes but kept a full set of backup options that I could postpone (e.g. schedule the next interviews a week later, etc.), but these other processes are getting to a point where they might make me offers too.
I don't change jobs often, so when everybody is saying the start date is "immediately" but a contract can't be written, I'm getting a little bit suspicious this is a tactic to hold me out of the job market.
Anyone ever faced this?
======
byoung2
I'm in the US so contracts aren't common but I did have a company stall before
making an offer...they kept calling me back for follow-up interviews and the
manager kept saying he wanted to hire me but needed higher-ups to sign off.
This went on for 4 weeks but in the meantime I kept interviewing at other
companies and eventually accepted another position. The next day I got my
offer letter but it was too late for them.
------
JSeymourATL
> Recruiter keeps saying "tomorrow". This company is located in Germany.
German companies are notoriously slower and more bureaucratic than their US
counterparts. The bottleneck is likely deep within the HR group
administration.
While you wait for them to submit a contract-- it's in your best interest to
keep exploring other opportunities/offers.
------
vectorEQ
I would say, don't wait for things if you need a job fast. It can happen that
in the end you don't get the job, but on the other hand, some companies have
lengthy procedures / a lot of signatures to gather before they can commit to
such a thing. Totally depends. I have switched jobs quite a lot, and I've
never waited for these things. For example, if another company does want to
give you a contract, you can use this as leverage at the one you prefer, to
have them speed up and match the offer etc. So it's always positive and useful
to have a few things going at the same time, even if it's only to have as
leverage.
Also: recruiters are bitches; some of them get paid just to supply a resume to
companies, and they often don't seem to care about the individual involved so
much as you would hope or expect.
Maybe just tell this recruiter (bluff) 'look, you aren't the only one making
offers, time is running out, you are postponing so much that I might need to
go with another offer' or something along those lines.
TCP Traceroute - Anon84
http://www.catonmat.net/blog/tcp-traceroute/
======
majke
I'm not sure about `tcptraceroute` but `traceroute -T` seems to be using SYN
packets with tweaked TTL value [1].
There is an even more interesting technique - injecting packets with modified
TTL into a currently running and valid TCP/IP connection. This often can be
used to detect network setup behind NAT on the server side. I think the most
famous attempt was 0trace by Michal Zalewski [2] but there are many
implementations. Oh boy, one day I even created an Nmap script to do this [3],
those were the days :)
[1] <http://linux.die.net/man/8/traceroute>
[2] <http://seclists.org/fulldisclosure/2007/Jan/145>
[3] <http://seclists.org/nmap-dev/2007/q3/186>
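(For anyone curious what the SYN-with-tweaked-TTL trick looks like in practice, here is a minimal, purely illustrative Python sketch using scapy, not the actual tcptraceroute or traceroute -T code; the target host and port are placeholders, and it needs root to send raw packets.)

    from scapy.all import IP, TCP, ICMP, sr1

    TARGET = "example.com"   # placeholder destination
    DPORT = 80               # a port firewalls usually let through

    for ttl in range(1, 31):
        probe = IP(dst=TARGET, ttl=ttl) / TCP(dport=DPORT, flags="S")
        reply = sr1(probe, timeout=2, verbose=0)
        if reply is None:
            print(f"{ttl:2d}  *")                          # no answer in time
        elif reply.haslayer(ICMP) and reply[ICMP].type == 11:
            print(f"{ttl:2d}  {reply.src}")                # time exceeded: a hop
        elif reply.haslayer(TCP):
            print(f"{ttl:2d}  {reply.src}  (destination reached)")
            break
        else:
            print(f"{ttl:2d}  {reply.src}  (unexpected reply)")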
~~~
X-Istence
And that is why you should scrub all incoming packets and make sure that they
don't have a TTL that is too low ...
PF's scrub reassemble tcp for example will make sure that the TTL can't be
lowered in an already existing connection ...
[1] <http://www.openbsd.gr/faq/pf/scrub.html>
~~~
majke
Alternatively - watch out what ICMP errors are outgoing from your servers.
(But don't block all ICMP or you'll screw up MTU discovery [1])
If you're worried about exposing your internal setup you also need to filter
packets with unusual IP options such as "Record Route" [2] or "Internet
Timestamp" [3]. Fortunately, scrub should notice that.
[1] <http://en.wikipedia.org/wiki/Path_MTU_Discovery>
[2] <http://tools.ietf.org/html/rfc791#page-18>
[3] <http://tools.ietf.org/html/rfc791#page-22>
If you have a source NAT on your servers, it's likely that an attacker can
easily detect that, no matter ICMP / TTL / IP options setting. Running p0f on
established connections will show your setup, for example:
<http://seclists.org/nmap-dev/2007/q1/175>
~~~
X-Istence
It becomes much harder to identify NAT when you also scrub out going packets,
and change the packet IP identification field with a random-id.
Other things that help as well are to not load-balance the TCP/IP connection,
but rather terminate it on the first machine, then pass the connection through
using something like varnish/haproxy.
------
mct
Hi. Author of the original tcptraceroute here.
I first wrote tcptraceroute in 2001, while I was working for a large Mid-
Atlantic NSP. I'd routinely see customers (and our NOC staff) reporting that
traceroutes to many popular sites (such as ebay.com, microsoft.com, etc) were
broken, and wondering if their connectivity was somehow defective. I would
have to explain (again) how traceroute worked, and how the probe packets it
uses are now (sadly) routinely firewalled, thus breaking our diagnostic tools.
At one point, the idea struck me that there was no reason we couldn't use any
type of packet as a probe packet for traceroute -- including packets that
firewalls found permissible. After a night or two hacking with libnet and
libpcap I had a working implementation. Suddenly being able to traceroute
through those firewalls and see the internal network of those large networks
felt magical, like having X-Ray vision. :-) Over time, I added other features
that provided even greater visibility -- such as the ability to detect when
some NAT/load-balancers aren't doing as good a job as they could be, and thus
revealing the internal IP address you're being NAT'd to. (Have a peek at
<http://michael.toren.net/code/tcptraceroute/examples.txt> for some example
output.)
I haven't done a great job of investing time into tcptraceroute over the past
few years. The code is still functional and hasn't suffered from any bitrot,
but the last "beta" release was "1.5beta7", back in 2006. There's no reason it
shouldn't have been released as "1.6", I just never got around to officially
"releasing" it. There are also a bunch of additional features I've
contemplated over the years that I'd really like to add. This thread has
definitely invigorated me to put more energy into the project soon, which I'm
grateful for. :-)
Tangentially, one of the annoying things about "traceroute" programs in
general is that the name is very overloaded, with many unrelated projects over
the years calling themselves "traceroute". Further complicating things, a
brand new project called "traceroute" was started a few years ago with the
stated goal of only supporting Linux, and wishing to be the one, true
traceroute in all Linux distributions. That project also implemented the
ability to use TCP probe packets, which is awesome. I'm glad to see that there
are other implementations out there. But sadly, they're also distributing a
shell script called "tcptraceroute" that invokes their traceroute program with
the flag to set the probe type to TCP. In Debian, if you have the "traceroute"
package installed but not the "tcptraceroute" package, typing "tcptraceroute"
will invoke their traceroute program. If you instead have both the
"traceroute" and "tcptraceroute" packages installed, typing "tcptraceroute"
will invoke my traceroute program. As the two programs differ in other ways,
this can be really annoying, and lead to the type of confusion that can be
seen elsewhere in this thread. I've also received a number of messages from
people asking for help with "tcptraceroute", only to have us both later figure
out that they're really running the other "tcptraceroute" program, which is
very frustrating for both of us.
I'm not sure what the best course of action is. I strongly believe in, and
advocate, open source. I want other people to be able to look at my code,
learn from it, borrow useful parts, and distribute changes. And yet, I really
don't want someone distributing a program that has the same proper noun I
chose years earlier for my own project. This is something I've thought about
off and on over the years as I consider releasing (and naming) other open
source projects, but don't yet have a good answer. I suppose one solution is
to utilize trademark law, but that's something I have very mixed feelings
about.
This turned into a much longer post than I intended :-), but if anyone has any
thoughts, I'd love to hear them! My contact information can be found in my
profile.
Thanks!
------
dfc
I am not sure how new this functionality is. Based on the changelog,
tcptraceroute has been a shell wrapper since 2007.
2007-07-19 Dmitry Butskoy <Dmitry@Butskoy.name> - 2.0.6
* Add tcptraceroute(8) shell wrapper
2007-07-16 Dmitry Butskoy <Dmitry@Butskoy.name> - 2.0.5
* New implementation of tcp method ("-T"), using
half-open technique. The old implementation module
renamed to "tcpconn" ("-M tcpconn").
~~~
pkrumins
Ah sorry, I should have said relatively new functionality. Also I should have
mentioned that there are different traceroute implementations. I'll update my
post.
Edit: updated the post.
~~~
dfc
I think route9's firewalk did this back when I had dial up.
~~~
pkrumins
Yeah. Well, I actually wanted to write about the tcptraceroute program originally,
but then discovered that traceroute on one of my systems had the -T flag. So I
thought, wow, traceroute now can use the tcp protocol. That's why I wrote "was
added recently" because I hadn't seen the -T flag before. It turns out that
that system had the <http://traceroute.sourceforge.net/>, while my other
system had the ftp://ftp.ee.lbl.gov/traceroute-1.4a12.tar.gz, which doesn't
have -T.
------
muppetman
"Was added just recently" - not sure about being added to the traceroute
program, but I've been using tcptraceroute for years.
It is very handy.
~~~
pkrumins
I updated the post. It wasn't really added just recently. Here's why I said
that: <http://news.ycombinator.com/item?id=5276983>
------
tptacek
Hasn't Windows tracert always used TCP?
~~~
jburnat
No, the difference is that the Windows version defaults to ICMP, while the
Linux one to UDP.
First rule of ant traffic: no overtaking - chaostheory
http://www.technologyreview.com/blog/arxiv/23176/
======
zealog
While it is interesting that there is "no overtaking" in the ant world, I
don't think this really has any real bearing on humans and driving congestion
(as suggested by the articles subheading). Having sat in long, single lane
stop and go traffic I can anecdotally suggest it's irrelevant.
More importantly, there is a big difference between personal locomotion and
commanding a vehicle. For example, the difference between my walking pace and
that of an Olympic champion sprinter is not THAT great. However, the
difference between my freeway merge speed and that of my mom is an order of
magnitude. Add in fixed and inflexible travel lanes, the ability to speed up
and slow down consciously and unconsciously, traveling at speeds that will
knowingly cause death in the event of a problem, etc. and you have a lot more
variance with higher stakes to the participants.
I'm not saying it isn't a fascinating bit of trivia, but I think Tom
Vanderbilt's book "Traffic" is apt to lead to more insight than this
particular nugget of knowledge.
~~~
tlb
The real difference is that stopping distance for an ant is a fraction of an
ant length, but for a car it's several car lengths. So for cars, most of the
space is following distance. For ants, most of the space is ants. Car density
decreases as the square of speed, while ant density is constant. That's not
because they have better algorithms but because they're in a different realm
of physics.
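(A rough Python illustration of that point; the car length and braking deceleration are made-up but plausible numbers, not anything from the article.)

    car_length = 4.5   # metres (assumed)
    decel = 6.0        # braking deceleration, m/s^2 (assumed)

    def cars_per_km(speed_kmh):
        v = speed_kmh / 3.6                     # m/s
        braking = v * v / (2 * decel)           # stopping distance to keep free
        return 1000 / (car_length + braking)    # vehicles per km of lane

    for s in (10, 30, 60, 90, 120):
        print(f"{s:3d} km/h: {cars_per_km(s):5.1f} cars/km")
    # Density collapses roughly with the square of speed, whereas an ant's
    # stopping distance is a fraction of an ant length, so ant density barely
    # changes with speed.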
------
narag
(Warning: improvised English, this has been very hard for me, sorry)
There is an annoying effect that I can see in my city in most of the jams that
I suffer. It's about the highway exits. The circular highway is usually three
lanes wide. It has exits, many of which are slow (entries to the city). The
exits take the form of an additional lane to the right. When the "contact
zone" of the rightmost lane with the additional lane is long and the traffic
is middle or heavy, the jam is a sure thing.
Why? Even if the exit has a reasonable speed, there are drivers that wait
until the last moment to take the additional lane. There are so many that most
of the vehicles that reach the exit are of this kind. If you are kind enough
to take the additional lane as soon as it is possible, you get caught in a
trap for ten to twenty minutes.
It's a moral dilemma. Either I am a good citizen and agree to be victimized
for a quarter of an hour daily or I do the same to others. Please, don't ask
me what I do.
It also creates a jam for the cars that aren't taking the exit, because the
late changes of lane affect them. Not only the right lane, but also the middle
one are filled with "late-exit-takers". And the slow cars have to change lanes
to avoid the stopped cars waiting to exit.
I've observed that there is a disposition of lanes that prevents the problem:
separating two lanes to exit and two lanes to follow on the highway, making
the exit lanes long enough (to have a "buffer" for the slowness of the exit)
and the fork instant, not a long "contact zone". But this disposition is
seldom used and I haven't really made the experiment to be sure :-)
About the article: I don't think the ants are similar to cars at all. Drivers
have freedom to behave differently, some are in a real hurry and we have
different speeds of reaction (to use the holes in the exit to wait until the last
moment).
~~~
eru
Where do you live?
~~~
narag
Madrid, Spain.
------
ams6110
Ants are not trying to text their friends, watch their GPS, and change the
radio station while they are in formation.
------
biotech
This article does make a good point. It has been my experience that highway
traffic can move more quickly when people do not change lanes. I have seen
this happen a couple times in very snowy conditions; the traffic density was
such that it should have been stop & go traffic; however, due to the large
amount of snow between each lane, everyone pretty much stayed in their lane.
This made traffic go at a steady 20-25 mph. (Note that the set of
circumstances in which snow can cause traffic to _speed up_ is very rare).
The main problem that this article does not address is bottlenecks. Even with
a perfect system, it is impossible to keep a constant speed if there is a
large influx of traffic at a certain point or the road narrows to fewer lanes.
------
beholden
The article makes several good points but I'd have to disagree with the no-
overtaking rule. Taking the human out of the equation, a la 'I, Robot', where
road travel is managed by computer, seems like a more viable option in the
long term. There have been several advances in this technology, from MIT
especially (Google 'self-driving car', do you really expect me to divert from
this tab for you?).
I work in Highway Engineering and Management; the company I work for manages
quite a few of the UK's motorways. I wonder if I could wing this point into a
meeting. Another gem, HN, thanks.
------
ams6110
The more I think about this the more I think the comparison is not relevant.
Ants are all going from point A (the colony) to point B (the food). Additional
ants are not joining the line midway, nor are any splitting off to some other
destination, nor do different streams of ants cross one another's paths.
If you had a one-way one-lane road with no cross streets, merges, or exits,
MAYBE the situation would be comparable; in the real world, I don't see a
whole lot of relevance.
------
carl_
In "Critical Mass" by Philip Ball chapter 7 covers "The inexorable dynamics of
traffic" which is highly worth a read if this subject matter interests you.
There's also some very interesting sections on crowd dynamics and route
finding.
------
10ren
Ants can form platoons (groups moving at constant speed) because they have the
same destination.
~~~
dsil
We have buses, which do help a good deal.
The Miracle of Vitamin D: Sound Science, or Hype? - robg
http://well.blogs.nytimes.com/2010/02/01/the-miracle-of-vitamin-d-sound-science-or-hype/?8dpc
======
ggchappell
I'm glad this was posted, although I do find the article to be a little
confused.
Certainly, what the writer is getting at has been the Achilles heel of the
whole take-your-vitamins movement: just because some chemical is good to have
in your body, does not mean that taking that chemical in pill form is a
helpful thing. While she does not mention it, this idea seems to be
particularly applicable to vitamin C. It is certainly an important part of
general health. However, if I understand the research correctly, beyond the
small amount necessary to prevent scurvy, studies have shown no benefits at
all to taking vitamin C supplements (corrections are welcome!).
So: the first few paragraphs suggest that, while high vitamin D levels seem to
correlate with good health, that does not mean that supplements help.
Correlation does not equal causation, etc.
That is all fine. However, she then goes on to cite studies, _all_ of which
were concerned not with vitamin D levels in the body, but with
supplementation.
Furthermore, she rather strangely says that the Women's Health Initiative
study "found no overall benefit", and yet also found that vitamin D + calcium
supplementation correlated with reduced incidence of hip fracture. How is that
not a benefit?
In any case, I welcome the direction that research seems to be going. Whenever
there is an easy way to improve the health of large numbers of people, it is a
good idea to find it.
------
MikeCapone
According to what I've read, it's not too surprising that some of these
studies found no effect: a 400 IU daily dose is too small to make a
difference, especially if not taken in a gelcap (vit D is fat soluble).
Poor kids who do things right don’t do better than rich kids who do things wrong - jordanpg
http://www.washingtonpost.com/blogs/wonkblog/wp/2014/10/18/poor-kids-who-do-everything-right-dont-do-better-than-rich-kids-who-do-everything-wrong/
======
BryanBigs
Another headline that's actually backed up by the data shown in the article
would be: High performing poor college kids smoke rich dropouts later in life
Since 19% of rich dropouts make the top 2 quintiles of income at 40, while 41%
of poor college grads hit those categories, the headline is crap.
~~~
sitkaroot
Is it so bad? The "top 2 quintiles" is the top 40%. They want to make a
statement about the extremes.
I think the numbers are interesting. It looks like the rich kids' distribution
is bimodal with a spike at the top quintile but centered on the 2nd/3rd
lowest, and the poor kids' is unimodal centered at the middle quintile.
Router hacker suspect arrested at Luton Airport - colinprince
http://www.bbc.com/news/technology-37510502
======
andreicon
maybe they should secure their f*ing routers!
The Economics of Programming Languages - bootload
http://www.welton.it/articles/programming_language_economics.html
======
davidw
Ok, I admit it, I'm a programming language geek:-)
Ask HN: Google App engine pricing vs. AWS? - petervandijck
Roughly, how does App Engine pricing compare to AWS? Of course, App Engine is free at first (although it's hard for me to get a sense of how much you get for free).
In particular: I have a forum app running on AWS that costs me about $150/month, getting about 500,000 pageviews. Could I run this for free on Google App Engine?
======
malandrew
Besides pricing, be sure to keep in mind some of the technical limitations
with Google AppEngine. Here are two serious ones:
1) Last I checked there was still no naked domain support. So if you plan on
defaulting to <http://mystartup.com/> instead of <http://www.mystartup.com/>,
you might want to check if this is possible yet.
2) HTTPS support is limited to <https://mystartup.appspot.com/>. You CANNOT
use <https://www.mystartup.com/>, last time I checked. This is due to a technical
limitation in SSL certificates that I'm not sure was considered at the time
that Google designed the AppEngine architecture. There are ways around this,
but it's messy and inelegant.
One of the great benefits of AppEngine is using it as a cache server, because
if I remember correctly this is either free or really cheap. I forget the
exact details of how we worked this out at my last startup, but it's worth
looking into because using Google's global datacenters for caching is much
much cheaper than paying for AWS bandwidth and you also get much lower latency
times.
~~~
dotBen
I heard flat files and binaries were intentionally bottlenecked to deter
people using app engine as a binary cache/edge cache.
Do you have data/etc on the performance?
~~~
malandrew
dotBen,
I will contact the devs I used to work with and ask them about that and let
you know.
------
HowardRoark
<http://code.google.com/appengine/docs/quotas.html>
Assuming that is your daily pageviews, you could run that app for free on
Google Appengine.
The only down side to Appengine in my experience is the cost of rewriting the
app and migration, and performance issues (and cold start up times) related to
Appengine's architecture.
------
pwim
You will need to rewrite your forum for AppEngine (or find some forum software
written specifically for AppEngine).
AWS's main selling point is that you can programmatically add and remove
instances. If you are not using that, I'd consider migrating to a VPS such as
slicehost or linode, where you should be able to run your forum as is.
How to Survive the Next Wave of Technology Extinction - digital55
http://www.nytimes.com/2014/02/13/technology/personaltech/how-to-survive-the-next-wave-of-technology-extinction.html?ref=technology
======
mark_l_watson
Good article, great advice. Scary how closely I follow that advice, varying
only in sometimes using a Ubuntu Linux laptop. I also chose a Samsung Galaxy 3
rather than an iPhone because I already have an iPad, and also having an
Android device is fun.
All great points about Amazon, Dropbox, and Evernote - they all deserve the
money I pay them.
The real problem with git. - d0m
Sorry, I have no blog. The commands make no sense and are hard to remember. It would be so quick and so easy to remember:
- git stage add file
- git stage remove file
- git stage list
Sorry, I can't give you the real commands, I never remember them.
Oh, and also, why can't it save empty directories? Is there a clause somewhere saying that empty directories aren't important?
Oh, and last thing: we all know that branches are great and cool, but I'm still tired of rebuilding everything to make the history "cute".
My 5 cents, happy karma downing.
======
aphyr
Really? I came from RCS, CVS, and SVN. Git's toolkit took about ten minutes to
get used to, but then I suddenly realized that it made _far_ more sense than
SVN. git add, git status, git commit -a. All repos are equal. Everything is
small, and everything is fast. Really fast.
There's rarely a need for git rm or git mv, since they're implicit. Waaay
better than SVN, where I could never remember if I had to move it by hand
first or afterwards, and half the time committed multiple copies of a file I
meant to rename.
The empty directory thing has never been a problem for me; my software creates
directories as needed. You can always drop a .file in the dir if you really
need it.
Branches work great. They're just pointers! Plus you can revert merges, and it
just _works_.
I never rebuild the history, but I guess if I was working in a large team with
complex patchsets I might. Our team of six seems to get along just great with
a central free-for-all repo.
Gitk is ugly. No two ways about that. :D
~~~
d0m
Oh, I never said SVN was better :)
------
sunchild
Commands make no sense?
git add .
git commit -am "feature working great"
git push
git branch new_feature
git checkout new_feature
git commit -am "new feature working great"
git push origin new_feature
How hard is that?!
~~~
d0m
There are lots of hidden steps between all your commands, sir, and you perfectly
know it.
Now, start removing files from your "git add ."
Or switch branches without losing your changes.
Or rebasing.
I never said it wasn't possible, nor that it was _hard_. But the interface (read: CLI)
still sucks.
~~~
sunchild
As far as I can tell, the only step that I left out is "git init".
Switching branches without losing your changes is poor form anyway. You should
be committing before you switch. If not, you can create multiple local
branches (ick!)
As for removing, it's familiar: "git rm removed_file"
------
zaius
You can easily make an alias for all of those commands.
I would say that "git add" is simpler than "git stage add". "git stage remove"
would definitely be nice. "git reset HEAD file" is a bit clunky. What would
"git stage list" do?
Saving empty directories is annoying. Just touch an empty ".gitignore" file in
the directory and you can then add the directory.
If you don't like making your history cute, and you don't see any advantages,
don't do it!
~~~
d0m
I won't fix git's bad command-line interface. You'll agree with me that it's
plain stupid to start creating aliases for such basic commands. However, I'll
be happy to create aliases for more complicated stuff. I mean, I shouldn't have
to use aliases to fix git's bad command-line interface.
git stage list would be a kind of git status.
To stage a file, I use git add. To view them, I use git status. To remove a file
from the stage, I use git reset. That's just plain wrong and annoying. It's
like that for every part of git.
I don't see why saving empty directories is annoying. If I don't want a
directory, I'll just remove it? Or, in the opposite case, if I want an empty
directory, I'll add a directory? Why should I put a hidden file inside a
directory to have it added by git?
Finally, for the history, I'm just saying that it's too complicated for
nothing. Of course I can avoid it. I could also use SVN and avoid branches
because they are complicated, but that's not the point.
~~~
zaius
I was agreeing that saving empty directories is difficult - it's annoying that
you have to put an empty file in them.
------
tetha
Hm, I am having trouble actually formulating why I think you dislike git. I
think the best description I can come up with is that git developers group
commands by different categories than you do.
Let's take a look at this: (1) git add puts things into the index, while (2)
git reset sets the working tree and/or the index back to a certain state.
The git developers basically grouped "adding stuff" and "setting things back"
together, while you would like to group "managing the index" and potentially
"managing the working tree" together (such that "stage" is "manage index",
while "tree" (or something like that) would be "manage working tree").
So I think no one is really right, no one is really wrong, and nothing is
really bad. There is just a perspective mismatch.
------
cmelbye
If it's really so hard to memorize a certain command, make an alias.
My aliases: <http://gist.github.com/344530>
------
ozataman
If you are using a Mac, try gitx as the front-end.
------
d0m
Oh, and gitk is so damn ugly.
Show HN: Outlook on steroids - Biba
https://www.hiri.com
======
mrmondo
I don't think this website could be any more broken on a mobile device, none
of the buttons work, everything seems out of scale and the video doesn't load.
New fake SegWit2X: premining millions - jasonjmcghee
https://github.com/SegwitB2X/bitcoin2x/commit/08220e2a3c8ba8f53801302a8b7b6d6da5a39645
======
Klathmon
Again, I'll repost what I posted in another thread this morning:
This Segwit2X has almost nothing to do with the original "Segwit2X". They are
basically taking what they think is the good parts of all cryptocurrencies,
and are putting it into one monstrosity.
\- X11 based algorithm from DASH
\- 2.5 minute blocktime from litecoin
\- 4mb (closer to 8mb with segwit) blocksize from Bitcoin Cash
\- difficulty adjustment algorithm from Bitcoin Cash
Plus they are promising some pretty impossible stuff in the "future":
\- ZkSnarks from ZCash (which they list on the same line as "anonymous
transactions" in some areas, and on a different line in others...)
\- Lightning Network from Bitcoin
\- Smart contracts from ETH
\- Something called "offline codes" which god only knows what it means
Basically it's a shitcoin that is trying to lure people in with impossible
promises and just taking what they think is the best thing from every major
cryptocurrency and putting it into one disgusting abomination.
Oh, and they chose a loaded name to make sure they'd get some coverage in the
"news".
This "Segwit2X" has literally nothing in common with the previous "Segwit2X"
from November except for the name. Nothing else is the same. How they got any
exchanges to list it under the same name I'll never know...
~~~
wereHamster
Just tell me how to claim these new coins and convert them to BTC once they
are worth a bit (price usually spikes just after the fork). I'm extremely
sceptical of altcoins which fork off of the Bitcoin blockchain, but welcome
the free contribution to my retirement fund.
~~~
londons_explore
I am less skeptical of coins which split from the bitcoin blockchain than
coins which are entirely new.
By splitting from the bitcoin blockchain, the authors of said coin have no
easy way to 'premine' or get any other kind of 'first in' benefit without it
being visible in the transaction history (like this is).
That should help the coin succeed on technical merits alone, rather than
PR/marketing stunts and pump and dump schemes which work much better on new
coins.
It also prevents 'coin inflation', where value gets split across a potentially
infinite number of coins. By splitting from the original blockchain,
businesses which don't want to take a risk on which coin/coins will succeed
can simply demand equal amounts of every coin (or take payment in one coin,
and then diversify into equal amounts of every coin). People who hold coins
take no risk if additional coins fork off.
~~~
flashdance
You can premine chain splits. See bitcoin gold, which had a premine of 100k
coins, or the very coin this thread is about, with a premine of millions.
~~~
londons_explore
You can't _secretly_ do it though.
With a new coin on a fresh chain, a premine can be secret, and outsiders can't
tell the difference between a premine and the initial few users.
~~~
flashdance
Oh, I see what you mean.
------
aviv
Imagine if in the real world we would have people come up with new metals to
compete with gold, silver, platinum, etc. as viable stores of value. This
dilution and unlimited supply of cryptocurrency types will not be good for
Bitcoin in the long run.
~~~
jstanley
What evidence have you seen that suggests this project is remotely a viable
store of value?
~~~
scalablenotions
Massive and growing consensus and significant transfer of representative value
from FIAT.
------
nerdponx
So that explains the bizarre email I got from Yobit today:
_Bitcoin Segweet in 18 hrs!
Dear YoBit Users!
Bitcoin Segweet [B2X] balances (1:1 btc) will be added in 18 hrs (27 dec)
Timer: [https://yobit.net/en/b2x/timer/](https://yobit.net/en/b2x/timer/)
Sincerely yours, Team of Yobit.Net_
It's a junk exchange anyway, but now seems like a good time to reiterate my
question from the other day, which started an interesting discussion but didn't
actually get an answer: who exactly is buying into these junk forks, when there
are high-quality altcoins on the market?
Principles we use to write CSS for modern browsers - vuknje
https://gist.github.com/alekseykulikov/68a5d6ddae569f6d0456b0e9d603e892
======
skrebbel
A bit meta, but I need this off my chest: I love how this document starts off
saying "this is for react apps". IMO every discussion about CSS coding
standards needs to start with context.
A lot of old CSS lore came from people who build websites. I mean those fairly
uninteractive things, focused on content: blogs, restaurants, newspapers.
Building an application that happens to use the DOM as their UI toolkit is
totally different. The whole "reuse the same classes but with different
content" thing that CSS classes were designed for becomes less important, and
"reuse pieces of _behavior_ " makes a lot more sense.
There's probably more domains or subdomains that warrant their own CSS best
practices. But I'm totally tired of a blog designer and a react app coder
fighting on HN about how the other one is doing it wrong, when really they're
just solving different problems.
~~~
sanderjd
Well said. This goes (probably double) for the similar conversations about
javascript, where lots of people question whether it should be used at all,
which is a reasonable question for _sites_ but not for _applications_.
~~~
threesixandnine
Those people probably have static content sites with lots of js on their mind
when they say that, no? Ads, tracking code, etc....
~~~
sanderjd
I think they probably do, which is why I really liked the parent comment about
how people should be clear about what they're talking about so that we stop
talking around each other all the time.
------
MatekCopatek
IMHO, naming conventions such as SUIT, BEM, OOCSS and the like are NOT a good
practice, but merely a workaround for dealing with the limitations of a global
namespace.
My preferred solution are CSS Modules[1], Vue's scoped styling[2] or something
similar.
[1] [https://github.com/css-modules/css-modules](https://github.com/css-
modules/css-modules)
[2] [https://github.com/vuejs/vue-
loader/blob/master/docs/en/feat...](https://github.com/vuejs/vue-
loader/blob/master/docs/en/features/scoped-css.md)
~~~
davidkpiano
CSS Modules is still a "naming convention," just one that is auto-generated
for you. Naming conventions in CSS are as much of a good practice as
naming/linting conventions in JS.
~~~
vinceguidry
Auto-generation is a good solution because names are the brain's interface to
the code. If the names are auto-generated, and the brain doesn't have to look
at the auto-generated names, and the abstraction doesn't leak, then you can
consider the problem solved.
~~~
Jasper_
My devtools still doesn't show me the unmangled name. The abstraction leaks.
As a practicality, I prefer the naming conventions to the CSS processing
pipeline, because it gives me a much better ability to debug and iterate.
~~~
Klathmon
With sourcemaps you can get close, but it still shows the "mangled" class name
in the HTML.
But with a dev/prod environment, you can have your cake and eat it too (for
the most part). In dev we have our classnames be in the format of
`[classname]---[file-path]---[filename]---[partial-hash]`. So one of my
classnames from a current project is `.container` in the file, but shows up as
`.container---src-components-scanner----styles---1446d`. And in production
shows up as `._1533JgnvGu096C2bCAkrxT`.
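(Purely as an illustration of that dev/prod naming split, not the actual css-loader mechanism, here is a tiny Python sketch that derives both forms of a scoped class name from the file path and the local name; the format strings are made up to mirror the example above.)

    import hashlib

    def scoped_class(local_name, filepath, prod=False):
        digest = hashlib.md5(f"{filepath}:{local_name}".encode()).hexdigest()
        if prod:
            # short, opaque name for production bundles
            return "_" + digest[:22]
        # verbose, debuggable name for development builds
        slug = filepath.strip("/").rsplit(".", 1)[0].replace("/", "-")
        return f"{local_name}---{slug}---{digest[:5]}"

    print(scoped_class("container", "src/components/scanner/styles.css"))
    print(scoped_class("container", "src/components/scanner/styles.css", prod=True))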
------
ulkesh
Some thoughts:
- Good CSS design needs zero !important statements. Fix your specificity or
your component architecture if you have a need to use !important.
- DRY is a good thing, not a bad thing. Maybe straight CSS isn't quite there
yet but...
- Why not use the tools at your disposal to aid in development (and DRY) such
as SASS/LESS?
- Flexbox will be great once IE dies the well-earned death it deserves.
I'm very happy the author had great success with their setup. What works,
works. But I hesitate to assume that just because it works without using DRY
principles or other tooling, it means you shouldn't.
~~~
daurnimator
> - Why not use the tools at your disposal to aid in development (and DRY)
> such as SASS/LESS?
Because there is so much tooling churn + barrier to entry. The point is that
sometimes it's better to repeat yourself than to use the tool of the month.
~~~
shabda
Both of the tools mentioned above, LESS/SASS, have been around for 5+ years. There is
churn in frontend tooling, but it is not a valid reason for not using LESS.
------
al2o3cr
".ComponentName-descendentName" nested inside ".ComponentName"?
Remember kids, the cascade is TEH B4DZORS - so always include everything you
would have gotten from it in every class name. _headdesk_
Solidly delivered on the "no DRY" premise. Maybe they should coin a new
acronym like "WET": "Write Everything Thrice"
~~~
awesomebob
CRY: Continuously Repeating Yourself
WET: Write Everything Twice
Credit: [https://roots.io/sage/docs/theme-
wrapper/#fn2](https://roots.io/sage/docs/theme-wrapper/#fn2)
:)
~~~
gr3yh47
WET is standardly 'Write Every Time'
------
andybak
A random brain dump prompted by this statement: "No DRY. It does not scale
with CSS, just focus on better components"
All programming involves resolving a conflict between two different principles
and a lot of the fiercest disagreements are between people that weight the
importance of these two things differently:
1. Reducing repetition
2. Reducing dependencies and side effects
The language and its tooling/ecosystem can affect the importance of these.
The project's complexity, rate of change and lifespan is also a factor that
might push you one way or the other.
But anything that helps one of these harms the other.
Thoughts?
~~~
MatekCopatek
I wouldn't say that they are always mutually exclusive. I think there is a
variable ratio between advantages gained by 1. and disadvantages caused by 2.
In other words, at first, reducing repetition will net nearly no negative
results - you just recognise different areas that do very similar things and
write a common functionality. The most basic example would be programming
languages providing standard libraries, even though everything could be done
with regular operations. At this point, abstractions are even simpler to use
than implementing things yourself.
Problems start to arise once you hit a certain point beyond which your
abstractions become harder to use and maintain than simply writing things
multiple times. This is where you should stop abstracting/modularising things
away (assuming that the reason is purely overengineering, not _bad_
engineering).
------
davedx
> Flexbox is awesome. No need for grid framework;
Yes, great, if you can ignore all the IE users. Is that what "modern" means?
I'd love to use flexbox where I work, but it's just not feasible to give up
all the customers we would lose.
~~~
freshyill
Why not both? Global flexbox support is >96% and in the US it's >97%.
Unless you're aiming for a 1:1 pixel-perfect experience in crappy old versions
of IE, it takes negligible effort to detect IE (or lack of flexbox support), and
just use something else. You can usually get pretty close to a lot of flexbox
layouts with display: table and related properties, and also falling back to
floats for others.
In a worst case scenario, you can provide old IE with a more mobile-like
experience and just let things stack up.
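A rough sketch of the kind of fallback I mean (class names are invented; note that IE11 ignores @supports too, so it would also get the table layout):

    .row { display: table; width: 100%; }       /* old IE fallback: table layout */
    .row > .col { display: table-cell; }        /* cells sit side by side */
    @supports (display: flex) {
      .row { display: flex; }                   /* modern browsers get real flexbox */
      .row > .col { display: block; flex: 1; }  /* equal-width columns */
    }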
~~~
spdustin
Enterprise in the US. IE9 still rules, in many cases. I actually work all over
this space, do not tell me otherwise. And now I have to maintain two
codebases?
~~~
Roboprog
Been there, done that. But you might mention to bosses/clients that MS no
longer supports/patches anything older than IE-11 on desktop versions of
Windows. They are most likely using an un-patch-able version of Explorer.
Virii, Trojans and Hacks, oh my!
[https://www.microsoft.com/en-us/WindowsForBusiness/End-of-
IE...](https://www.microsoft.com/en-us/WindowsForBusiness/End-of-IE-support)
~~~
bigger_cheese
My work "upgraded" to IE 11 late last year. The problem is so much of our
internal infrastructure was built to target IE 8 (or earlier) that when our
information services guys deployed IE 11 they forced it to run in
compatibility mode.
Now you have to go through this endless dance of Enable/Disable compatibility
mode depending on what site you are trying to visit. We have a lot of non-technical
users, so as soon as you ask them to delve into menu options to use
some added functionality on a site, you lose them.
Even technical users hate this, so most people sideload Chrome. However, a large
number of workstations are locked down, and those people have no option but to
continue with IE.
------
EdSharkey
Plug for test-driven CSS:
[https://github.com/jamesshore/quixote](https://github.com/jamesshore/quixote)
I've experimented with it on a green field project and got promising results.
Found I could refactor my CSS with confidence.
------
bbx
I agree with the Flexbox and DRY principles, but it's weird to still rely on
arbitrary naming conventions like SUIT when CSS modules have been around for a
while now.
Naming things has always been difficult, especially in CSS, where it can lead
to merging/overriding/precedence issues. _Not_ having to _think_ about whether a
CSS class name is available or conflicts with something else is a benefit of
CSS modules that boosts productivity to a degree I've rarely
seen anywhere else in CSS, especially in large codebases.
You've got a button and want to call it "button" and style it using ".button"?
Go ahead. It will only inherit styles if you _explicitly_ say so. The global
namespace remains healthy for each component you write.
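For anyone who hasn't tried them, a tiny sketch of what that explicitness can look like (file and class names are made up; the exact setup depends on your bundler):

    /* Button.css, loaded as a CSS module */
    .button {
      composes: reset from "./base.css"; /* inheritance only when you ask for it */
      padding: 0.5em 1em;
    }

The build step then rewrites .button into a unique hashed class name, so nothing outside this file can collide with it.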
------
prashnts
Personally, I very much dislike using CSS classes unless required. I prefer
clean markup, with classes used only where they make sense.
For a context, I somehow can't wrap my head around writing something like:
<div class="nav nav-inverse nav-hide-xs">
when `<nav>` makes more sense. Sure, if you have a case with alternate
"inverse" navbar, go ahead with a class `<nav class="inverse">`.
About the flexbox, ah, well, even now they have undefined behaviour on several
elements such as a `fieldset` [1].
[1]: [http://stackoverflow.com/questions/28078681/why-cant-
fieldse...](http://stackoverflow.com/questions/28078681/why-cant-fieldset-be-
flex-containers)
~~~
talmand
These days using "nav" instead of <div class="nav"> is the preferred method by
default. The other two classes are just modifiers that may or may not be
required. There's nothing wrong with them.
Also, I see "fixed" bug reports in both Chrome and Firefox when using flex
with fieldset. To be fair, recent fixes.
------
Roboprog
FWIW, I'm using Angular (1.x) on a project at work, rather than React. One of
the things I did recently was take the CSS file used in the project ("Few
Pages App", rather than SPA, which _had_ a common .css file), and turn it into
an Angular (javascript) "directive".
I wish I had done this earlier. I have no "compile" step, it's just straight
js plus the framework. However, I now have a mechanism to use variables for
any (new) stuff with repeated settings, inserted into the rest of the text in
the "<style> ... </style>" template.
------
kowdermeister
I've developed my last 5 projects (corporate websites with many different
layouts) with SASS + Foundation 6 and no naming convention. Instead I relied on
nesting selectors.
I can behave and usually not go deeper than 4-5 levels. It's a really neat way
to unambiguously tell a CSS rule where it belongs. For example I can
create a
section.hero-block{ /* things inside are scoped */ }
CSS selectors that live outside are pretty basic, utility-style ones; they
exist on one level, so they can be easily overridden by the scoped ones if
needed.
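Roughly what that looks like in practice, as a sketch (SCSS; names invented):

    .text-muted { color: #888; }        // one-level global utility
    section.hero-block {
      .title { font-size: 2.5rem; }     // scoped to the hero block
      .text-muted { color: #ccc; }      // scoped rule wins over the utility
    }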
~~~
Nadya
This is how I've developed projects for the past two years. Every module is
scoped. If it needs variations but contains the same HTML structure - it gets
a modifying class (.full, .half, .side, .footer, whatever)
If it has a different HTML structure - it is a different module entirely:
`.module.v1` VS `.module.v2`
Doesn't matter if 85% of the CSS is shared between v1 and v2. If the HTML
structure is different, it is a different version of that module. If you can
run a diff checker and return 100% the same HTML structure but you need a
different coat of paint, you add a variation class. _All modules begin as a
"v1"_. This prevents it from needing to be added to the scope selector if a
"v2" is ever added. I've yet to work at such a scale where the loss in CSS
performance was a problem.
Utility and State classes live in global space. Global being defined as
anything unscoped, not "everything in CSS is global space". Since everything
is scoped - I can safely reduce selectors. Very rarely does it go more than 3
levels.
I use some level of OOCSS but don't use it for things like `.floatLeft`. If it
is a style I will want to _remove later_ , typically for responsiveness, then
I don't want a class `.floatLeft` that is really `.floatNone` at a certain
size. I would rather take `.item` and change `float: left` to `float: none`
with a media query.
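Something like this (a sketch; the breakpoint is just an example):

    .item { float: left; }
    @media (max-width: 600px) {
      .item { float: none; }  /* the module drops its own float when it stacks */
    }

versus a .floatLeft utility class whose name would be lying at small sizes.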
------
seag64
So, I don't do any front end work in my day-to-day, so this may be a stupid
question. This article starts out with how CSS gets a lot of negativity. What
alternatives are there? Do browsers understand anything but CSS for styling?
~~~
frankwiles
There isn't an alternative really. Like Javascript, it's all there is.
~~~
seag64
So are these other tools people are talking about some different methods that
just transpile back to CSS in the end or something like that?
~~~
gr3yh47
Yep (such as LESS, SASS, SCSS, etc)
------
zephraph
I think an important thing to note here is that this individual's team is very
small. If you have a small team that closely collaborates then scaling things
which take discipline (like CSS) becomes vastly simpler.
CSS is difficult because it takes so much effort to do things the "right" way.
It requires a good set of linting and testing tools or constant vigilance to
maintain a correct, robust system.
As the codebase or team grows, the difficulty of that task increases. That, to
me, is why CSS is often viewed in a negative light.
~~~
talmand
I believe you could say the same for a large number of coding languages out
there.
------
surganov
Recently I got into functional CSS:
[https://github.com/chibicode/react-functional-css-
protips](https://github.com/chibicode/react-functional-css-protips)
~~~
pluma
At that point you might just go full css-in-js and embrace inline styles.
I'm not sure which is worse:
A) having twenty classes on each element, each doing only one or two things
B) having twenty classes overlapping on the same one or two things
~~~
talmand
I would say both A and B are incorrect.
------
jorblumesea
Funny how all the original markup and languages are becoming the machine code
of the web. No one writes js anymore, just compiles into js. No one writes css
anymore, just SASS -> css. Html? Nope, directives or shadow dom.
How to operate before incorporation? - bmaier
What is the best way to run your startup before incorporation? For example if I were getting a startup ready for Y combinator and you recommend not incorporating beforehand, how do you recommend handling any business transactions that would occur in the meantime?
======
SwellJoe
DBA or LLC. Costs $20 or $200, respectively, and each takes about an afternoon
to fill out the forms and file. Or just do business as an individual. If you
don't need a business bank account, you can do that too. PayPal and Google
checkout will accept money on your behalf without much trouble.
TeenSafe phone monitoring app leaked thousands of user passwords - rkho
https://www.zdnet.com/article/teen-phone-monitoring-app-leaks-thousands-of-users-data/
======
djrogers
This is absolutely inexcusable- requiring customers to disable two-factor auth
and storing passwords in plaintext? Seriously, there should be a minimum level
of competence required to run a business...
------
rkho
It looks like TeenSafe kept Apple IDs and their corresponding passwords stored
in plaintext. The service required parents disable two-factor authentication
on those Apple IDs for the service to work.
Ask HN: Where to check YC alumni names? - ahmedaly
Someone I know from Egypt is claiming he is a YC alum.
I am not sure how to check whether he really is.
======
jacquesm
As what company, what year then check:
[https://www.ycdb.co/](https://www.ycdb.co/)
Find the company and check the team page; use google to read up on the person.
Caveats: sometimes people have the same name and yet they are not the same
people, not all founders are always listed on the team page, especially not
for companies that have pivoted, not all companies have a team page.
Success rate for this sort of thing is pretty good if you invest some time.
------
dang
If you email hn@ycombinator.com we might be able to help.
~~~
ahmedaly
Thanks so much!
Science Is Getting Less Bang for Its Buck - dsr12
https://www.theatlantic.com/science/archive/2018/11/diminishing-returns-science/575665/
======
protonfish
This seems like the obvious result of many of the unhealthy forces working against
science today: replacing professorships with part-timers, rewarding publishing
quantity over quality, and pressure to promise a quick monetizable result over in-
depth basic research.
I don't buy the "low-hanging fruit" argument. I think it vastly overstates the
fraction of scientific knowledge that we possess. Contemporary scientific
research is in a state of crisis: unhealthy and dysfunctional. Until we pay
the most talented researchers to spend decades on in-depth study instead of
spending pennies on segmented, p-hacked, post-hoc conclusion garbage papers,
major discoveries will continue to disappear.
~~~
706f6f70
> Until we pay the most talented researchers
It's not even that. One of the contributing factors is that we are no longer
hiring for talent or merit. If you're a male child interested in science and
in high school right now, you've been through 8 years of being told that you
should not go into science because they need more women of colour who follow
the right religions and have the right sexual orientation. You're constantly
bombarded with "SCIENCE DOES NOT WANT MEN".
And sure, we can say "well if he had the necessary grit, this would not
discourage him". No. No amount of grit will get you in when governments are
shaping science funding to punish having male staff. No amount of grit will
get you in when indoctrination training is mandatory for the selection
committees. No amount of grit will get you in when the first filter in a
hiring process is to remove all male applicants.
I'm sorry but science has to go through a decline in the West for a bit. We
still have some remnants to be worked through the system, but the next couple
generations of Western scientists are going to be generally pretty low
quality. On the bright side, science is not exclusively a Western tradition at
this point. Other cultures that at least pretend to still focus on merit and
talent will carry the scientific tradition forward.
~~~
Wowfunhappy
> If you're a male child interested in science and in high school right now
> you've been through 8 years telling you that you should not go into science
> because they need more women of colour who follow the right religions and
> have the right sexual orientation. You're constantly bombarded with "SCIENCE
> DOES NOT WANT MEN".
Do you have any examples of teens / young adults being discouraged from going
into science because they are white and male? There are certainly some (I'd
say good!) programs to encourage women and minorities, but that's quite a
different thing from men being actively discouraged.
Scientific fields are still significantly more male than female, even among
recent graduates.
~~~
v_lisivka
> Scientific fields are still significantly more male than female, even among
> recent graduates.
Why "still"? Majority of mans are genetically selected to be adventurers,
while minority of woman are adventurers. It's impossible to change genetics
with advertising, so we will never have 50/50% split, until human genome will
change. Why not just accept natural distribution?
~~~
max76
I'm not convinced that joining the scientific community is an adventure or
that this genetic difference exists. The differences can be explained without
using genetics.
For example, there is a popularly held opinion that girls are bad at math.
Girls who believe they are bad at math might try less in math courses. Girls
that receive lower grades in math from trying less might be intimidated by the
math requirements of a science program.
~~~
dnautics
It's not an adventure, but it is a high-risk endeavor with a slim chance of a very high
payoff, and men are typically more drawn to that than women; for
example, there is a huge gender disparity in fishing the Alaskan fisheries or
base jumping.
~~~
v_lisivka
Offtopic: Can you explain to a non-native English speaker, please, why
"adventure" does not match "a high-risk endeavor with a slim chance of a very high
payoff"? I have seen "adventure in science" a few times, so I assumed that
someone who goes on this "adventure in science" can be called an "adventurer". What
is wrong with my conclusion?
~~~
max76
I wouldn't consider a career in science to be high risk. It's often easy to
transfer from academic science to private engineering. Science degree holders
have respectable earnings on average.
------
madhadron
I'm not sure where to begin on this, so I'll resort to a brain dump.
The Nobel prize has become steadily more political. More senior researchers in
the field have had more time to accrue political power, so expect the awards
to reach further back in time.
Research productivity has dropped, though. Today a professor in biology has a
more than full time job just to get money for their lab. So you go through
gradschool, go through postdoc, and when you've got all this training, you
stop doing science and become a grant writer. If you don't bring in money, you
don't have a lab.
For the postdocs and grad students, they live very insecurely. Their pay is
poor, they live knowing they will probably relocate somewhere entirely
different in three or four years, so there's no point having links into the
local community. They work for someone who has no training in management and has
no time to deal with them if the money is going to flow. And there are still
lots of places where the postdocs aren't considered full time employees of the
university and don't have, say, health insurance. These are hardly the
conditions that you can expect to produce good work.
Fields are mined down to the last details rather than looking at unexplored
areas nearby. There are many reasons for this. As a grad student you work on
what your advisor works on. As a postdoc, you work on what your lab works on.
As a professor, you work on what you worked on as a postdoc and it requires
multiple years and lots of luck to reorient a lab. You need a community to
provide evidence of your competence in order to advance at each stage and get
tenure, which you can't easily get if you wander out of an established field.
Nor is there any training on how to effectively find things to work on.
~~~
lumost
Is this only a US and European problem? When I was in college in the aughts,
there was a popular trend among Physics PhDs to move to China following their
grad program, as China provided a block grant to them to start a new lab.
The pattern whereby professors have to apply for small grants continuously to
keep a lab funded and operated is an inordinate waste of time vs. a block
grant with periodic evaluation.
~~~
dnautics
Keep in mind that by "small grant" we mean $100-200k towards a million.
~~~
madhadron
Which _is_ a small grant. You would be hard pressed to run a five year
software project on a million dollars, much less a project that requires real
laboratory overhead and materials.
------
kuanbutts
Quickly scanning comments I do not think anyone else has brought up:
administrative bloat.
More money is being spent on science, but is more money actually making it
through the administrative bloat encumbering most institutions to the actual
performance of research?
Anecdotally, I have a colleague who has received funding from the NSF and the
amount of regulations and paperwork and various travel and meeting-related
obligations related to the funding soak up so much of the actual dollar amount
supplied. (You have to use your funding dollars to satisfy the various
required meetings, travel, and paperwork-filling.) The constraints are so
ridiculous that satisfying them consumes nearly all the resources the NSF
provided, and the little that remains is actually not sufficient to perform
the research with. Worse, he has now wasted months of his time satisfying
various oversight requirements administrated by both the NSF and the research
institution he works in, leaving him an unreasonably small amount of time to
actually achieve any significant progress on his work. Once this round of
funding dries up, he will be left with no choice but to repeat the process in
order to secure some more funding to continue to barely make progress on his
stated research goal.
If I had to make up a number to describe the dollar efficiency of research
funding, in some cases I might assert it is negative: Not only is it just
being soaked up by self-serving, efficiency-draining administrative
requirements, it literally destroys the most valuable resource (time!),
leaving the researcher with none to actually engage in their subject matter of
expertise.
~~~
buboard
It's huge. Similar in a European ERC-funded lab: the PI is constantly
traveling, and there is little oversight of the work, let alone actual scientific
output. It feels like a large portion of the funding is designed to keep a lot
of people busy doing nothing.
~~~
nonbel
At least the US government treats it like a "jobs program", just like
everything else.
------
Yaa101
In my opinion science is getting less bang for its bucks because the low-hanging
fruit has been picked by now. The big fundamental structures describing how
nature works are more or less known. It's about details nowadays, and it takes
more time and effort to get the details right. You can see that it takes
multidisciplinary teams nowadays to discover the connections between these large
systems and how they (how everything in the universe) connect. People are curious,
which is our most precious gift from nature, and science will go on as long as
people stay curious. But yes, not all science is about returns of value in the
economic sense; a lot of it is fundamental, and often the results of science
lie in a drawer waiting either to be monetized or used for further
discovery. Science moves in spurts and hiccups, not straight forward at an
even pace.
~~~
forkandwait
> The big fundamental structures describing nature its working are more or
> less known. It's about details nowadays and it takes more time and effort to
> get the details right.
This (bullshit) is what they thought in the 1890s, and is why physics was
considered boring, basically a dead field. Oops!
~~~
Koshkin
Difference is, this time around we might be standing at the edge of what we
can possibly know and/or understand.
~~~
davidivadavid
And how could we possibly estimate if we are or not?
------
Animats
That observation is decades too late. Big companies set up sizable research
labs in the 20th century. After WWII, it was expected that a big industrial
company would have a sizable R&D operation.
Then, in the 1980s, those big R&D operations stopped paying off. Gradually,
the big corporate labs closed - Bell Labs, RCA's Sarnoff Labs, Xerox PARC,
Westinghouse Labs - gone. IBM's labs are far smaller than they once were. The
payoff wasn't there. The easy hits from research were gone.
~~~
marcosdumay
Hum...
My reading is that R&D facilities couldn't compete with the sheer number of
small improvements that came from the uneducated labor force. Japan grew by
empowering its employees to do development, and the entire world copied it.
Which led us down a path where people are making Uber-for-X apps and some are
actually getting rich that way.
And that is exactly the opposite of a scarcity of low-hanging fruit.
------
ozborn
Another problem with this data is the rise of team science and huge author
lists for papers - something the Nobel prize isn't really able to deal with.
Think of the human genome paper or the Higgs boson paper - both good 21st
century science but lacking the easily identifiable "super scientists" the
Nobel committee is looking for.
Additionally, it is hard to fully evaluate the impact of scientific work
until many years have passed since publication.
It's no wonder they prefer to find their winners in the 80s...
------
pvaldes
A lot of 'money spent on science' has not really been spent. It is just offered with
lots of requirements that nobody can meet, and then recycled for other projects.
If granted, the money can be blocked for months (instead of paying scientists
for an entire year, you pay 7 months... for one year of work), and it can be partially
siphoned off again through bureaucracy. Some politicians are very fond of this
'miracle of bread and fishes' trick.
"In the last ten years we spent '10 millions' in science projects". Looks
great.
"In the last ten years we flashed the same million 10 times before to put it
again in our pocket". Not so great, often the real thing.
------
justaguy1212
I actually don't see this from his data. It seems more like he started with a
hypothesis, wrote the article as if he was correct, and then gathered data to
try to support it. Then, after seeing that the data didn't quite fit his
idea, he wrote around the inconveniences.
~~~
ImaCake
Yep. I don't know what article the other people in this thread read, but it
wasn't the same one I did. At best, the graphs show that scientists'
opinion of Nobel prize winners stays about the same, regardless of time.
------
DenseComet
This is why science should be led by the government, not for profit companies.
Science is for the greater good of humanity, it should not be solely about
making money.
~~~
CuriouslyC
I love science (at least in the ideal form). That being said, if knowledge
provides a real, tangible benefit to mankind, people should be willing to pay
for it. If it doesn't, people shouldn't be forced to pay for it.
I agree that some valuable research would go undone if performed by companies
because the time window to see a return on investment would be too long.
Unfortunately, academia is a huge mess, and we really need a new approach to
basic science.
~~~
Konnstann
What's your definition of basic science?
The requirement for science to produce real, tangible benefits to mankind in
order to get funded makes a lot of it unfeasible, because a good bit of it is
useless on its own but is worth figuring out because of potential uses in
the future.
~~~
bluGill
The real problem is we won't know if something is useful until after we get
results.
Gravity is a perfect example - physics has a lot of open questions on gravity.
What if someone closed one of them? It could mean nothing, just a slightly
better understanding of why things fall - or it could be a major change that
allows us to create "anti-gravity paint", making a self-supporting colony on
Jupiter simple (imagine all the "land" you could own there - whatever "owning
land" on a gas planet means).
~~~
AstralStorm
Come on, such a thing would allow for reasonable space travel, which is even better
than just making colonies. It is essentially Mass Effect Element Zero - you
could create gravity wells for things to "fall into", creating enormous
accelerations of space ships while at the same time protecting the crew from
the effects.
Potentially even a kind of faster than light travel.
As a side boon, new kinds of materials could be created in extreme artificial
gravity conditions, since this maps to extreme pressures that are impossible to
achieve otherwise. That includes creating stars in a controlled environment,
making for immense energy availability. Instead of magnetic fusion containment
you'd have gravitational fusion containment.
This is how powerful practically breaking gravity can be.
~~~
bluGill
I just hand wave assumed a slightly different set of rules. With your set of
rules (which is at least as likely as mine) you are correct. Since both of us
are speculating we are both equally right. Of course it might be that better
understanding of gravity proves that artificial gravity is impossible.
------
AstralStorm
I'd personally bet on some sort of major breakthrough in space travel or
energy sources or both (perhaps due to new physics) to have comparable results
to the decades of the atom.
That being said, there is just not enough gain, or even potential gain, in the
technologies being surveyed right now, even considering full-on genetic
engineering and universal nanoconstructors.
Well, unless immortality or nigh immortality of some form happens. Or we found
a way to expand our intelligence and performance in some huge way. And then it
becomes available.
Compared to atomic and subatomic physics AI is a joke and genetics is at witch
doctoring levels. Self driving cars are like the steam experiments in 1800s at
best, with even less of an impact.
At least genetics has potential to change everything, more than automated
transportation ever can.
Nanotechnology is making some progress but not nearly fast, good or cheap
enough.
Another breakthrough could come from side physics like photonics.
------
seizethecheese
This could also be titled: “the foundational truths in a field are more
consequential than subsequent findings”.
------
vharuck
What if the survey respondents value earlier discoveries more because they
understand them better? I remember my mathematics professors saying, once they
chose a specialty within a field, there were only a dozen or so people in the
world who could discuss it on the same level. So concurrent appreciation is
rare within a single field, let alone across fields.
But, as a high schooler, I was taught the basics of Einstein's general theory
of relativity. I couldn't do the math, but I appreciated its value. Maybe the
respondents undervalue recent discoveries in different fields.
~~~
LeonB
Effects like that could only be measured if they repeated the same survey many
times over decades before publishing. Even then it would be confounded by a
lot of other factors.
Instead they’ve marched forward using junk science to discredit science. I
found it a really weird and off key article.
They might be right but I wouldn’t use this article to demonstrate it. (And I
otherwise have a lot of respect for the authors.)
------
rjkennedy98
A huge part of this is the general perversion of science. It starts at the
bottom with the abuse of p-hacking by lowly grad students. It goes all the way
to the top with fraudulent for-profit science by Big Pharma to get drugs and
medical devices approved that don't work. All of it is part of the academic-
medical industrial complex that views science not as process for discovering
truth, but as a tool that can be used to generate profit.
There are enormous breakthroughs that I know are very close, but simply
inconceivable due to the academic-medical establishment that has stranglehold
on science.
------
alpineidyll3
Ex-professor here. Reason for this is trivially obvious to any outsider
observing the day of an academic: A dazzling torrent of time-wasting grant
applications, awards and other such nonsense.
The system is basically designed to take productive scientists, and waste them
as quickly as possible. After all, how better to protect your mediocrity?
Never been happier than after leaving...
------
socalnate1
Does anyone else find it fascinating that Patrick Collison (the co-founder and
current CEO of Stripe) co-wrote this article?
The dude makes me feel inadequate like no other.
------
chiefalchemist
Probably impossible to find out, but seeing a comparison - however loose - to
other countries would be ideal.
Anecdotally, it seems there's at least one significant leap per week listed
on HN that can be attributed to China. Reaching 100 million degrees from a
fusion reaction (from earlier this week) comes to mind.
But of course the general public isn't enlightened by the mainstream media
about such things.
~~~
pas
The Chinese fusion reactor was not a leap. Already achieved in JET (Joint
European Torus).
That said, if China decides to pour a lot more into R&D, it'll be doing leaps
in no time, simply due to brute force.
~~~
chiefalchemist
Link to JET achievement? Please??
Chinese? European? Any comparison would be better than none.
Brute force? Does it matter? Results are results, yes?
The arc of my point is, the USA so often has a very self-serving, often myth-
based view of itself. It's as if no one else in the world might have a better
approach. That's a mistake.
~~~
pas
JET achieved Q of about 0.7: 24 MW in, 16 MW out.
It operates somewhere around 100-200 million Kelvin.
[https://www.scienceinschool.org/2013/issue26/fusion](https://www.scienceinschool.org/2013/issue26/fusion)
> Brute force? Does it matter?
No, I didn't mean it as a negative thing. Fusion research funding is
very low compared to what is needed to build big enough devices. Because when
it comes to fusion, size matters, as efficiency goes up with size.
And that's what I meant by brute force. China can simply build a bigger one
and reap the benefits of size.
> Results are results, yes?
Yes, and more data is always better in plasma science.
------
pascalxus
Why do they assume something went wrong? At some point, we will reach a point
where there is far less left to discover or invent (other than AI -> as long
as anyone has a job, there's always room for automation, yikes!).
We assume that technological progress can continue forever and ever. I think
this is an incorrect presumption.
------
jgalt212
Excellent book on this topic.
Big Science: Ernest Lawrence and the Invention that Launched the Military-
Industrial Complex
[https://www.amazon.com/Big-Science-Lawrence-Invention-
Milita...](https://www.amazon.com/Big-Science-Lawrence-Invention-Military-
Industrial/dp/1451675763)
------
netcan
Considering how large the corporate sector is today and how corporate inspired
the government sector is today... Science is remarkably unaffected.
We have tenure and sociology departments, a teacher-researcher combo
tradition... All stuff that does not lend itself to ROI calculations or the kinds of
resource allocation done elsewhere.
------
cossatot
I'll agree with Yaa101 that a big part of the story is the picking of low-
hanging fruit, or to put it less metaphorically, that in the first 100 years
of institutionalized science (let's say 1850-1950 without getting caught up in
the exact dates), there were a lot of fundamental questions that could be
addressed through the application of relatively systematic, rigorous
observation and experimentation, and modeling with the kind of math you can do
on a chalkboard.
Within this time period, though, a lot of these questions were addressed and
the _new_ questions that arose required more data, better instrumentation, and
more advanced mathematical modeling techniques to address.
In my own field (geology, in particular tectonics and earthquake studies),
this was laid out in a very explicit manner: the fundamental mode of
observation is geologic mapping, and the terrestrial surface of the earth
slowly got mapped. The mapping of the past may be refined or re-interpreted
but rarely does it need to be redone from scratch. It is done to a reasonable
level of resolution. There is still a lot of unknown under the ocean basins,
but we have strong theoretical and empirical arguments for why those areas are
not as complex as continents and therefore less interesting.
The late 1960s through the mid 1980s saw the development of plate tectonic
theory which _completely_ revolutionized the science. Now, 50 years in, we
have some second- or third-order questions but most of the first-order
questions have been addressed.
Today, the major developments of the field come from better instrumentation,
for the most part. In the sub-field of tectonics, progress comes from the
development and application of new methods for dating rocks or other geologic
features (including things like exposure dating or 'how long as this rock been
at the surface of the earth'), and from using satellite-based measurements of
earth deformation (GPS and radar interferometry) to actually measure the
motion of tectonic plates and sub-plates. Additional, continuous refinements
in seismic imaging of the subsurface (driven by the oil industry primarily)
has also been very helpful.
This stuff is really expensive! It's hard to go camp out, hike a bit, make
some observations, and write a good paper. The instrumentation to get the age
of rocks might cost $100,000 and then when consumables and salary are factored
in it might cost $500-$2000 per sample. You might need 10-20 samples to really
find out anything new in your 10-km by 10-km area of interest. And I believe
that geology is quite cheap relative to high-energy physics or whatever. Major
geophysical experiments can cost millions. We piggyback on physics and other
tech to a large extent---launching GPS satellites for example, or using
obsolete particle accelerators for geochemical measurements. The oil industry
spends (tens of?) billions a year acquiring data as well but very little of it
becomes public or available to researchers, though the cumulative data release
from industry is significant. [NB, I may be 1 order of magnitude short on any
of these numbers.]
In general there have been very few major theoretical advancements in the past
20-30 years. We have gotten better at recognizing coupling between tectonic
process and earth surface processes, and as instrumental datasets slowly
increase (as we observe more earthquakes, etc.) some smaller boxes get
checked. However, a geologist from the mid-1980s would be able to navigate
today's scientific landscape pretty well. The fads are different but like any
fashion, many are cyclical and were fads in the 80s too.
I personally see advancement coming from better statistical and numerical
modeling, and the availability of high-quality global datasets (primarily
created through large international collaborations, which is a post-cold war
thing). I also see a lot of room for improvement in our understanding of the
coupling of mechanisms spanning vastly different timescales--for example
earthquakes occur in seconds, post-earthquake phenomena last weeks to decades,
the earthquake cycle lasts hundreds to thousands of years, and the cumulative
deformation from earthquakes and related processes is what we call 'tectonics'
over million-year timescales. It's really hard to make a single numerical
(i.e., finite element) model that works over all of these timescales and is
driven by basic physics (i.e. an earthquake results from forces applied rather
than being imposed). Nonetheless there are almost certainly a lot of really
important coupling processes that occur on these different timescales but they
are really hard to analyze (and a lot of interesting stuff happens at 20 km
depth and at mm/yr rates, which is pretty damn hard to observe).
So I guess I see a lot of 21st century science as bridge building rather than
outright discovery. This is fine. An analogy would be moving to a new country.
The first bit is the discovery of the place, then you learn what language they
speak. The learning doesn't stop there. As you learn the language, a lot of
daily stuff makes sense. As you gain an understanding of the culture and the
history, and can interact in a meaningful way, the value of that learning
continues to increase, but it doesn't feel like 'discovery' like it did in the
first year.
------
lostmsu
Yet we directly observed gravitational waves, and compute power continues to
grow.
------
oh-kumudo
Humans have already worked out most of the applicable rules of our physical world. What's
left to be discovered requires much more time and effort.
------
jrochkind1
An unpopular possibly "devil's advocate" opinion: Considering that our
scientific advances have led to us quickly using an enormous developed
capacity to make the earth much less hospitable to human life, maybe a
slowdown in scientific advances is not a bad thing.
On the other hand, yes, we are going to need some science to deal with what we
have wrought without maximum misery. But I don't really trust us with it.
------
fastaguy88
What a joke. There are hundreds of thousands of papers published each year,
and some very small fraction of those (but certainly hundreds to thousands)
make a big impact on their field. But we summarize science based on three
yearly prizes? Clearly science is not having enough impact on science policy
writers.
------
a_bonobo
I find this article preposterous.
>While understandable, the evidence is that science has slowed enormously per
dollar or hour spent.
The only evidence for this is Nobel Prizes won, split up by decade, and
polled!? Biology, arguably one of the most exciting fields right now, doesn't
even get considered in the Nobel Prizes! You _can_ get a Nobel Prize in
medicine/physiology for biology-related efforts (see GFP), but there's no
Nobel Prize for plant-related biology (that's what the Kyoto medal is for,
which isn't mentioned here?!?)
I can't wait for science-illiterate politicians to take this ('Look at what
YCombinator is saying!') and say we should defund science.
It has _never_ been as hard to get funding for your science as now - see e.g.
[https://theconversation.com/with-federal-funding-for-
science...](https://theconversation.com/with-federal-funding-for-science-on-
the-decline-whats-the-role-of-a-profit-motive-in-research-93322) \- with huge
issues (scientists living in precarious circumstances, ridiculously low
salaries, attacks and defunding by conservative governments (US, Australia,
the Harper government in Canada)) - writing an article _now_ saying that science is
stagnant is foolish at best, dangerous at worst.
>Over the past century we’ve vastly increased the time and money invested in
science, but in scientists’ own judgement we’re producing the most important
breakthroughs at a near-constant rate. On a per-dollar or per-person basis,
this suggests that science is becoming far less efficient.
A much simpler explanation is that we've picked the low-hanging fruits (think
of how diverse the work of Darwin was! He picked up novel fossils literally on
beach walks), now (especially in physics!) we need bigger and stronger efforts
to go for the harder fruits.
~~~
CompelTechnic
>>Over the past century we’ve vastly increased the time and money invested in
science, but in scientists’ own judgement we’re producing the most important
breakthroughs at a near-constant rate. On a per-dollar or per-person basis,
this suggests that science is becoming far less efficient.
>A much simpler explanation is that we've picked the low-hanging fruits (think
of how diverse the work of Darwin was! He picked up novel fossils literally on
beach walks), now (especially in physics!) we need bigger and stronger efforts
to go for the harder fruits.
Your explanation that we have picked the low-hanging fruits is not mutually
exclusive with the increased time and money invested in science. They would
even be two sides of the same coin- as the easy pickings disappeared, we chose
to invest more to get the harder fruits. As the trend continues, the search
space for novel, useful discoveries becomes larger and larger, and the cost
increases.
How do you prove to the politicians that a particular scientific investment is
worth it?
~~~
a_bonobo
>How do you prove to the politicians that a particular scientific investment
is worth it?
I don't think that's possible. The history of science is full of science that
shouldn't have worked but did - think of Barry Marshall drinking Helicobacter
pylori, which no-one thought to be a causative agent, or think of Norman
Borlaug, who came up with shuttle breeding (grow a plant twice a year by
driving seeds around, at a time when people thought that you have to let the
seeds rest for a while) - but the history of science is also full of things
that should have worked but didn't, we just don't hear about those (classic
survivor bias).
In some cases, you can tell whether an experiment is going to work and how
it's important, especially if it's incremental work - in some cases, you
simply cannot predict.
Just think of how delighted G. H. Hardy was that his mathematics was kind of
useless to the general public, and how useful his work is nowadays (most
importantly the Hardy-Weinberg equilibrium, I guess?). He could not have
predicted how that would work out! What would he have written into his application
for funding?
------
buboard
I don't understand their methodology at all. Considering that the frequency of
awarding Nobel prizes remains the same, there is nothing that can be inferred
from their survey.
That said, I agree with the premise of their article that science is not living
up to expectations, and I believe this started somewhere in the 90s. For
example consider the recent "Burden of disease" survey which was linked here a
few days ago:
> GBD 2017 is disturbing. Not only do the amalgamated global figures show a
> worrying slowdown in progress but the more granular data unearths exactly
> how patchy progress has been. GBD 2017 is a reminder that, without vigilance
> and constant effort, progress can easily be reversed.
[https://www.thelancet.com/journals/lancet/article/PIIS0140-6...](https://www.thelancet.com/journals/lancet/article/PIIS0140-6736\(18\)32858-7/fulltext)
This effect was probably many years in the making, and is only now becoming
apparent. One possible explanation for the "sluggishness" in scientific output
is that nowadays it is lacking new grand ambitious projects, in other words
centralization and Big Science. The proliferation of PhDs has changed the way
funding is allocated in recent decades for purely political reasons. It favors
thousands of small grants that go to individual independent researchers who
are all studying minute effects, are looking for results that barely pass the
significance threshold and are publishing in order to build publication
records to advance their career. The total number of researchers has doubled
since the 80s; therefore, this model may actually be detrimental to the
process of scientific knowledge discovery at this time. For example, in my
field of computational neuroscience, there is currently no equivalent of the
"large hadron collider" project for the brain. Some years ago one such
project, the 'Human brain project' was proposed and funded, which had a
specific and ambitious goal: to simulate the entire brain. Academic politics
however fundamentally altered the project and it is now a funding source for
various kinds of ordinary research. To be clear, the project was probably ill-
conceived from the start (imho), given that we don't have enough info to
simulate the brain correctly, but regardless it was one of the few such
efforts towards a singular ambitious goal. Subsequent funding schemes such as
the US Brain initiative do not have such a focus. The ones who do undertake
big science are private institutes like the Allen institute in their attempt
to accurately map the entire brain. In any case, I think public policy has to
focus less on academia and more on science in order to make progress;
otherwise we are bound to see detrimental effects a few years down the road.
------
otikik
The graph should be adjusted by inflation, and probably by other factors, in
order to accurately back the article's claim. $1M in 1940 would be equivalent
to $17M in 2018.
~~~
jblow
The graph states that it is adjusted for inflation.
~~~
otikik
Ok thanks. I must have missed it.
------
agumonkey
science having its own logistic curve?
------
DoctorOetker
A lot of the comments are listing their personal suspicion of causes, without
attempting to illustrate or provide evidence. Some of those suspicions are
probably true, and some false. Obviously it is also an outlet for all our
personal gripes with the system. Irrespective of the veracity of a proposed
cause, I would like to see more discussion of actual examples of current
progress, and attempts to identify what caused or enabled the progress
by comparing with the average paper in the back of our mind: what was
different, why did they achieve the progress today, and why did nobody achieve the
insight, say, five years ago?
Ideally, since we are discussing on HN, it should be an example that would be
understood by most participants here.
It is in this spirit that I will give exactly such an example of recent
progress (with which I am entirely unaffiliated).
First some minimal background which I assume you are _not_ familiar with:
quantum chemistry and solid state physics software.
Just take a quick look at this list on wikipedia, you may recognize the names
of some pieces of software like ABINIT... make sure you pay attention to the
DFT column, and notice that virtually all packages support DFT calculations, which
has been pretty much _state of the art_ for the last few decades.
[https://en.wikipedia.org/wiki/List_of_quantum_chemistry_and_...](https://en.wikipedia.org/wiki/List_of_quantum_chemistry_and_solid_state_physics_software)
In physics, gradients often arise naturally, for example forces as gradients of
potential energy... But in general, even outside physics, gradients are useful
in nearly all fields for optimization...
Now the part you probably already understand: AD, Automatic Differentiation or
Algorithmic Differentiation, and in the often-occurring case of a single scalar
function in N variables, _reverse-mode_ AD... And that it takes on the order
of 5 times the cost of a single function evaluation to calculate a gradient.
Now specific subfields of physics have been using AD and adjoint sensitivities
for a long time (nuclear engineering, oceanography) but it is not a standard
part of physics curricula.
Outside of these specific subfields, _Automatic Differentiation has been
gaining momentum over the last few decades_ (books, comprehensive reviews, ...).
Physics students of course learn differentiation symbolically on paper for
short formulas, or in software packages like MACSYMA, Maple, ... but even then
you keep the number of variables low for tractability. These students will
also understand you can emulate differentiation numerically by using a finite
delta:
    df(x1, x2, ..., xN)/dx2 ≈ [f(x1, x2 + delta, ..., xN) - f(x1, x2, ..., xN)] / delta
fully understanding that for a complete gradient you need (N+1) evaluations of
f, that delta too large will be inaccurate due to functional nonlinearity, and
delta too small will be inaccurate due to numerical rounding of floats...
Nobody thinks of showing reverse mode automatic differentiation to physics
students as part of their curricula! If you crash into a physics course and
ask the students how to calculate the gradient of a big function of 1000
variables, they won't be able to help you, but then you can explain that such
a thing is in fact possible!
You see where this is going...
What if some of the numerical computations, say molecular modeling, could
benefit from this insight?
That's exactly what happened recently:
[https://arxiv.org/abs/1010.5560?context=cond-
mat](https://arxiv.org/abs/1010.5560?context=cond-mat)
Now look at the authors' institutions of this paper:
SISSA (International School for Advanced Studies); DEMOCRITOS National
Simulation Center; Quantitative Strategies, Investment Banking Division,
Credit Suisse Group.
That's people with an interdisciplinary background.
Could it be that we are over-specialized? Or perhaps mis-specialized?
Imagine an alternate world where there is so much math that mathematics has
decided to specialize thus: after a common course of Fundamental Math, you can
specialize in Definitions, or perhaps in Theorems, or perhaps in Proofs, or
perhaps in Conjectures... clearly their progress in math is going to suffer!!
Could it be that experts and professionals have become "too polite" to the
point of circle j* ? A kind of "politeness omerta"? Imagine a mathematician
and a physicist talking: as long as we're discussing math, the physicist nods
with interest, and doesn't make suggestions how he would do it differently and
vice versa. The "don't criticize a professional in his domain" attitude?
Wouldn't it have been better if someone who understood AD, say 20 years ago,
had ended up in a conversation with a molecular-chemistry-software physicist and said
something like "I don't have the domain knowledge, but the first thing I would do
is find out whether the calculation could be viewed as the computation of a
gradient, or else as an optimization of an explicit goal", only to hear "how dare you make a
suggestion outside of math?! and that's thousands of variables, it's
_intractable_"?
We might have had this 20 years ago! Mansplain it if necessary.
It's this upside-down world where we are polite and stoic in our papers, but
elbow-working backstabbers undermining the workgroup next door, all the while
feigning this professional "politeness omerta".
The trauma of Facebook’s content moderators - donohoe
https://restofworld.org/2020/facebook-international-content-moderators/
======
DavidVoid
That was good article and I think something really needs to be done to make
content moderation more humane.
This quote from the article
_“It gets to a point where you can eat your lunch while watching a video of
someone dying. … But at the end of the day, you still have to be human.”_
reminded me of a similar article published by WIRED six years ago [1].
_Eight years after the fact, Jake Swearingen can still recall the video that
made him quit. He was 24 years old and between jobs in the Bay Area when he
got a gig as a moderator for a then-new startup called VideoEgg. Three days
in, a video of an apparent beheading came across his queue._
_“Oh fuck! I 've got a beheading!” he blurted out. A slightly older colleague
in a black hoodie casually turned around in his chair. “Oh,” he said, “which
one?” At that moment Swearingen decided he did not want to become a
connoisseur of beheading videos. “I didn't want to look back and say I became
so blasé to watching people have these really horrible things happen to them
that I'm ironic or jokey about it.”_
[1] [https://www.wired.com/2014/10/content-
moderation/](https://www.wired.com/2014/10/content-moderation/)
~~~
kevinskii
It would be great if there were a way to make content moderation more humane,
but perhaps this is like saying that it would be great if we could make
hospital emergency room work more humane. It is unavoidably traumatic, and ER
staff are known to detach and use dark humor as a coping mechanism. Content
moderation is a similarly noble and difficult profession.
~~~
agentdrtran
ER staff are paid a lot more, are sometimes even in a union, and also get the
benefit of helping people and seeing the effect of their help.
~~~
Spooky23
Ever hang out with an ER staffer?
Most of their workload is bullshit. People with colds and sore throats,
depressing people using ER as primary care, assholes using 911 to score
Medicaid cab vouchers.
And at any time, any number of people can show up with any kind of personal
tragedies, from strokes to various traumas. My sister quit after a 13-year-old
bled out from a GSW, and she walked out of the room and got kicked in the head
by a prisoner who had been stabbed after he bit the ears off of three other
prisoners and broke the arm of a guard.
~~~
kevinskii
Bite one inmate's ear...shame on you.
Bite a 2nd inmate's ear...shame on him.
Bite a 3rd inmate's ear...that's fricking badass.
~~~
dang
Please don't do this here.
------
CM30
It's probably a bit controversial to say this, but haven't large sites had to
deal with these problems for years by the time the likes of
Facebook/Twitter/YouTube/Instagram/whatever came around?
It's obviously horrible doing this as a job and it's obviously a bit more
common on Facebook than say, a large old school internet forum, but... at
least in this case people are paid to do this. Reddit mods and old school
forum mods have to deal with this stuff for free.
~~~
tmpz22
Right but the difference is the sheer amount of money involved and the
politicization that comes with that. If Facebook was worth 100m nobody would
care. If facebook only has 10m users nobody would care. It's the scale that
completely breaks the legal and political frameworks that our society leans on.
~~~
wheelie_boy
I'd say another difference is the percentage of your attention that is being
devoted to this kind of thing. Presumably reddit or phpbb mods aren't spending
40 hr/wk looking at gore.
~~~
Nasrudith
Wouldn't that suggest that the "ideal" way to do it is a very part-time
job with high volume? Like, say, 5 hours a week. It would be a scheduling
and logistical nightmare of course but it would give more "dispersion time"
for trauma.
~~~
wheelie_boy
I'd be interested to see if the tooling around moderation could also be
improved. For example, could the images start heavily blurred, with a circle
of clarity that opens when you click. Or something like that, which would harm
throughput, but be more humane.
~~~
prawn
This is a very interesting idea. It could reveal only a portion and see how
often that was enough for the moderator to pass judgement.
------
MattGaiser
They should recruit people on Reddit for this. A large number of them seem
perfectly fine with this type of content.
There are dedicated communities to all these things.
I do not mean this as a hit against Reddit users. It is just that there are
small pockets of people who are psychologically capable of tolerating this or
even enjoy it. It seems more ethical to hire people from /r/watchpeopledie.
~~~
kylecazar
I'd wonder if regulars in these subreddits are typically professional and
healthy people to have on a team, though (honestly no idea, never been).
It's a strange conundrum because ideally you want good people who are
comfortable watching things good people rarely want to watch.
Don't have much experience with this demographic -- maybe there are more sane,
emotionally healthy people watching snuff films for fun than I instinctively
expect.
~~~
teddyh
> _maybe there are more sane, emotionally healthy people watching snuff films
> for fun than I instinctively expect._
Probably not, since there are no documented instances of any snuff films ever
found.
~~~
justanotheranon
[https://consortiumnews.com/2019/07/11/the-revelations-of-
wik...](https://consortiumnews.com/2019/07/11/the-revelations-of-wikileaks-
no-4-the-haunting-case-of-a-belgian-child-killer-and-how-wikileaks-helped-
crack-it/)
Thanks to Wikileaks publishing the Dutroux Dossier in 2008, we know hundreds
of snuff films were recovered from the notorious pedo-rapist Marc Dutroux.
Just because the police keep those films as sealed evidence that are never
published does NOT mean snuff films are an Urban Legend. I can see why the
police always keep snuff films secret. Imagine what would happen if snuff
films were uploaded all over the Internet? There would be mobs of angry
villagers armed with torches and pitchforks descending on jails and prisons to
lynch pedos. It would make harassment from QANONs look like school yard
bullying.
~~~
teddyh
Snuff films are, fortunately, still only a subject of urban legends, since
those films you mention do not meet the definition. Were those films made _in
order to entertain other people_ than the person who made them? Were the films
ever _distributed_ to any of those people? Was anyone ever actually killed
_in_ any of these movies? If the answer to _any one_ of those three questions
is “no”, then the movies, horrible as they may be, were _not_ “snuff” films,
as commonly defined. From what I can see in your reference, the first _two_ of
those things are decidedly _not_ true, and the reference is unclear regarding
the third.
(The films were, according to your reference, made only for _blackmail_
purposes, and therefore certainly not made for enjoyment of any viewer, nor
were they ever distributed to other people for their enjoyment. The reference
does claim that people were killed in a specific place which was also filmed,
but does not, what I can see, explicitly state that any murders were actually
filmed, other than incorrectly calling all the films “snuff” films.)
~~~
shadowprofile77
I think that some of you should take a look at news websites about the drug
and kidnapping cartels down in Latin America... They regularly post their own
snuff films online and I can guarantee you that they're not only very real but
also extraordinarily brutal. No urban legend about it.
------
intopieces
Every time you use Facebook, see an advertisement there, or click it, every
time you share content on Facebook and get others to engage with that
platform, you are contributing to a platform that is directly responsible for
human psychological harm, in many different ways.
Same for Twitter, and for Reddit, and Instagram... and probably TikTok.
I don’t believe the use of these platforms can be considered ethical.
~~~
Kiro
You are commenting on one of those platforms right now.
~~~
intopieces
HackerNews is a limited-focus link aggregation and comment platform. It
doesn't require the kind of moderation that larger scale, broad-focus social
media platforms do.
I don't envy the work that Dan and Scott have to do in the slightest, but I
don't think they'll end up with PTSD from it. At least, that's what I gathered
when I read "The Lonely Work of Moderating Hacker News"[0], especially this
description of it: "Pressed to describe Hacker News, they do so by means of
extravagant, sometimes tender metaphors: the site is a “social ecosystem,” a
“hall of mirrors,” a “public park or garden,” a “fractal tree."
[0][http://archive.is/bbzan](http://archive.is/bbzan)
~~~
twblalock
So you think the only ethical sites are the ones that operate at a small scale?
~~~
intopieces
I can't say for sure, since I don't know every site. But I would posit that
ethical operation is inversely correlated with user base size.
------
irrational
I'm surprised there hasn't been a revenge movie where one of these moderators
uses their access to find out where these people live and starts hunting them
down like some sort of Batman-like figure. I could see a movie like this being
a means of making the general population aware of the terrible stuff
content moderators have to watch.
~~~
slg
This is totally not what you are talking about, but the closest thing I have
seen to that is the Netflix docuseries Don't Fuck With Cats. It hits on a lot
of the same topics. The primary differences are that the hunters are Facebook
users and not moderators and they are hunting the person to get justice and
not violent revenge.
~~~
Tiksi
There's a show called Darknet
[https://en.wikipedia.org/wiki/Darknet_(TV_series)](https://en.wikipedia.org/wiki/Darknet_\(TV_series\))
that's somewhat along these lines too, but it's very much in the horror genre.
------
msapaydin
What I am wondering, and this is probably a dumb question, is why this has not
been automated. Can't content moderation be done with modern, capable
machine-learning systems? There must be plenty of training data for this,
and just like a spam filter which does not require humans in most cases, this
should also be automatable. Why is it not?
~~~
nitwit005
It has been. Most of these sites catch a ton of images and video automatically
when they are similar enough to prior known content.
That doesn't matter from a staff point of view though. You have a queue to
work through. You'll be putting in an 8 hour day dealing with the stuff the
system doesn't catch. The automation just means they don't need as much staff.
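For images, that near-duplicate matching is commonly done with perceptual
hashes rather than exact file hashes. A minimal sketch using the open-source
Pillow and imagehash libraries (the file names and the 5-bit threshold are
made up for illustration, not anyone's production values):

    from PIL import Image
    import imagehash

    # hashes of content that has already been reviewed and banned
    # (in practice a large, shared database rather than two files)
    known_bad = {imagehash.phash(Image.open(p))
                 for p in ("banned_1.jpg", "banned_2.jpg")}

    def matches_known_bad(path, max_distance=5):
        """True if the upload is within a few bits (Hamming distance) of a
        previously banned image -- tolerant of re-encoding and resizing."""
        h = imagehash.phash(Image.open(path))
        return any(h - bad <= max_distance for bad in known_bad)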
~~~
three_seagrass
Yep. One way to view automation with machine vision / perception is that it
can cover ~85-95% of the true positives.
You're still going to get false positives and false negatives that need human
review, and at a scale of Facebook, that's a lot of humans.
~~~
msapaydin
I am just hoping that those currently uncovered cases will be "milder or more
nuanced" cases that will be less damaging to the psyche of human moderators
and will, once labeled correctly, improve the coverage rate of automated
moderators.
------
weresquirrel
If you haven’t seen The Cleaners, I highly recommend it.
[https://thoughtmaybe.com/the-cleaners/](https://thoughtmaybe.com/the-
cleaners/)
------
shadowgovt
I can't help but wonder what the numbers look like on this content. One of the
things Facebook should have enough data to know is how prevalent this sort of
thing is.
Is deeply offensive content generated by a handful of users frequently, or
many users less often? What's the volume look like?
~~~
filoleg
Agreed. I would definitely love to see some sort of a basic data analysis blog
post on this topic from FB, similar to how match.com used to publish data
analysis posts on their blog back in the day.
------
op03
Given auto-translate works reasonably well these days, how much commonality
does negative content have across different regions and languages?
Anyone know? Is it like 10%? 80%?
There must be, by now, a whole lot of data on whats getting flagged
region/language wise.
Maybe we need a Cloudflare for Content.
~~~
latchkey
Depends on the language. In my experience Vietnamese -> English is horrid and
comes out as mostly garbage. Interestingly, the reverse has been better from
what my Vietnamese friends tell me.
------
ehnto
There are some things I will explicitly avoid in projects I'm working on. One
of them is allowing picture or video uploads to users if the service is free.
There's no low-risk, cost effective solution for moderating it, and in my
country we also don't have safe-harbour type laws, so any content on the
platform is your responsibility.
~~~
jcun4128
I had considered using services like Google Cloud Vision or other services for
explicit image detection anyway.
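For what it's worth, a minimal sketch of that kind of check with the Cloud
Vision Python client (this assumes a recent google-cloud-vision package and
already-configured credentials; the file name is made up):

    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    with open("upload.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    # SafeSearch returns a likelihood (VERY_UNLIKELY .. VERY_LIKELY) per category
    annotation = client.safe_search_detection(image=image).safe_search_annotation
    for category in ("adult", "violence", "racy"):
        print(category, vision.Likelihood(getattr(annotation, category)).name)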
On most websites, it seems, when you upload an image it's immediately
available. I always wonder if it goes through some basic moderation system or
just waits to get reported.
I've done tagging jobs on MTurk before; it's weird... seeing random people's
images.
~~~
ehnto
I would wager most wait for things to get reported rather than be pre-emptive
about it. I wonder if MTurk could ever be fast enough for an approval process
on something like Instagram.
I think vetting users rather than content is probably the most efficient
method of community curation though. If you have an approval process or other
ways to validate that users aren't nefarious then you would reduce your
workload by a bunch. Most bad images seem to come from burner accounts.
Unfortunately the only thing that comes to mind is a social credit as a
service style system and maybe that's not where we want to steer this ship...
~~~
jcun4128
Yeah, the tagging jobs I did were weird, like "find the baby" haha... and other
obvious tagging videos.
I can understand the point about burner accounts from the user side. I feel a
physical response when I get denied on my first post in, say, a sub, and I'm
like "Excuse me..." But yeah, having to stick around before contributing is
definitely discouraging/limiting.
------
Havoc
It also sounds (from other articles) like people are seeing specific clips
multiple times. Surely a bit of semi-AI filtering should be able to blacklist
the regular stuff
------
praveen9920
The issue is as real as western countries dumping garbage in eastern countries
just because they can pay them off.
The local governments won't acknowledge this as a problem because of the money
flowing in and because a lot of people's livelihoods depend on it. But we
should call it what it is: exploitation.
~~~
quadrifoliate
I think in regards to India, the issue is a little more complex, and
intertwined with social mores. This one sentence from the article is really
important:
> According to Dr. K. Jyothirmayi, a Hyderabad-based psychiatrist, stigmas,
> such as the perceived impact of a mental health diagnosis on one’s marital
> prospects, often prevent young Indians from seeking treatment.
From growing up in India, my view is that meditation and the general state of
the mind has been taken seriously for, well, centuries. But move the focus
towards anything with 'psych' or 'mental' in the title and people in India
will shy away from it _even if it is affordable or free_. "Seeing a mental
health counselor" is a recipe for considerable social stigma in India [1], not
an empowering practice like it's perceived as in the United States.
The solution to this is hard, and in my opinion not something that Facebook or
their Indian subcontractors can accomplish.
\-----------
[1] I'm talking "All your relatives will refer to you as the crazy person" or
"Your girlfriend's parents will call off the engagement" levels of stigma, not
just "I don't feel comfortable talking about it in a bar" levels. Like a lot
of other things, families with higher levels of income and education are
_sometimes_ an exception.
------
ericjang
I read somewhere long ago that there have been incidents of FBI employees
tasked with reviewing child pornography [1] who saw so much that they became
desensitized or even aroused by it. Does anyone know if this is well-
documented, or was this an exception rather than a common occurrence?
[1] [https://www.fbi.gov/history/famous-cases/operation-
innocent-...](https://www.fbi.gov/history/famous-cases/operation-innocent-
images)
------
ed25519FUUU
I can barely even read this article without becoming extremely sad for these
people and the individuals (especially children) who were abused in these
videos. I hope everyone involved gets the healing they need.
It's not the flat-earther and somesuch conspirary communities that worry me.
It's the people who share and get enjoyment from this kind of content, and the
networks which enable it.
------
maerF0x0
I can't help wondering about the economic (choices) side of this, around 1)
Should this victimization be taken into account when punishing offenders who
are eventually tried, and 2) Is the damage done to the content moderators
marginally worth
keeping the marginally caught content off the platform (recall there already
is automation doing the bulk of the work)...
I do think we need to think about the side effects of laws where some US
citizen demands that Facebook must employ content moderation or else be liable
for hosting it: it doesn't result in Facebook executives, engineers et al.
bearing the burden. It results in some poor person in a 3rd world country
bearing the burden.
Kind of like the environmental aspect of things I would think the best
investment is in total prevention of the heinous acts in the first place. ie,
it's much cheaper to prevent CO2 than it is to clean it up after the fact.
------
totetsu
Is this a case of trying to solve social problems with technical solutions?
Maybe a platform like FB that is not run by the community that uses it simply
cannot regulate users' behaviour like a real social community can.
------
lowmemcpu
I recall a statistic from about 10 years ago that computer forensic
investigators in law enforcement burn out after two years due to the trauma of
the images they are exposed to.
~~~
throwaway0a5e
There's likely some confounding factors. Pressing "go" on the overpriced
software tools and then entering into evidence what you find is the lowest
level of work in that field so the churn is going to naturally be very high as
people move up or out. The pay also isn't that great.
~~~
mschuster91
No, that is not the issue. Rather the issue is that even the hardest stuff on
Facebook isn't remotely comparable to stuff of actual criminals, and the
effort is wildly different:
\- Facebook: it's violating rules? Delete, next.
\- Forensic IT on a multi TB disk _full_ with child porn: document _every_
photo, what it shows, extract identifiable faces to cross reference with other
content (to check for recurring places and victims), and the process is even
more gory for video content. You have to watch every second or the defense can
attempt "you didn't watch the video in full where the perp gives the victim an
ice cream at the end" or whatever else. The amount of time you spend with
documenting a single photo or video is many orders of magnitude worse than FB
content mods.
~~~
henrygrew
This sounds very grievous; it's sad that a human being has to do this work
------
anonu
Can we build better AI off of the data these moderators have generated to make
their lives easier?
Can you crowdsource the moderation task to double-check what the AI is
flagging?
~~~
uniqueid
> Can we build better AI
Nope. That's what the people in charge of our social media companies keep
trying, over and over, and it doesn't work at all. AI moderation is to them
what the Ring is to Gollum. They can't accept that "cheap and fair" moderation
isn't currently possible.
> Can you crowdsource the moderation task
This is the clear answer, imo, and it isn't obvious to me why these C-level
execs avoid it. If I had to make a guess, maybe they think engaging with
millions of their users about moderation would open a can of worms.
~~~
shadowgovt
When the goal is to prevent users from seeing this content, crowd-sourcing
moderation defeats the purpose.
Users don't want the back-stop to be "a critical mass of community members
object;" for content like this, they want to see zero of it, ever, and will
choose to use another site that satisfies that hard constraint if FB cannot.
~~~
uniqueid
> they want to see zero of it, ever
Many _want_ that, but very few (probably nobody at all, by this point)
_expect_ it.
> When the goal is to prevent users from
> seeing this content, crowd-sourcing moderation
> defeats the purpose.
It's not actually all-or-nothing, and if it were, it would defeat it no less
frequently than paid moderation already does. People tolerated toxic internet
content before social media not because they never encountered any, but
because they encountered much less.
No system can ever stop 100% of users from seeing 100% of the material they
find objectionable. Every user has their own idea of what material is beyond
the pale, and even if we invent AI that is competent at blocking content, and
capable of tailoring its filters per-user, it will still fail some of the
time, because users are people, and people's tastes change over time.
Aside from that, the way Youtube, Reddit, Twitter et al, currently operate, a
user who reports objectionable content typically waits days, weeks, or years
(or forever) for the company to take action. If you give a user a little
_real_ agency, it goes a long way to mitigate their displeasure over
occasional exposure to unwanted content.
~~~
shadowgovt
What real agency does the system you propose offer? Because one down-
moderation from a volunteer moderator is a drop in the bucket.
Meanwhile, if your system requires users to view beheading videos regularly,
people will just migrate to a site that doesn't require that.
~~~
uniqueid
If I had to come up with a system, it would be some sort of reputation
hierarchy, with a small number of paid employees at the top. Each level is
responsible for auditing the levels under them.
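To make the shape of that concrete, a toy sketch of reputation-weighted
reports with escalation to the level above (every name, weight and threshold
here is invented purely for illustration):

    from collections import defaultdict

    HIDE_AT = 3.0   # invented: total reporter weight that auto-hides content

    def tally(reports, reputation):
        """reports: list of (user_id, content_id) pairs; reputation maps a
        user_id to weight earned from past reports upheld by auditors.
        Returns content to hide now and content to escalate to the next
        level up (ultimately the small paid team at the top)."""
        weight = defaultdict(float)
        for user, content in reports:
            weight[content] += reputation.get(user, 0.1)  # new users count little
        hide = {c for c, w in weight.items() if w >= HIDE_AT}
        escalate = {c for c, w in weight.items() if 0 < w < HIDE_AT}
        return hide, escalate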
> What real agency does the system you propose offer?
The ability for millions of users to moderate themselves. So... fast response
time, by opinionated, emotionally-invested moderators, as opposed to the
status quo: slow response time by paid burn-outs following a flow-chart.
> if your system requires users to view beheading videos regularly
I'm pretty sure that content would be reported faster than anything else. To
be fair, another part of the issue is ban-evasion using multiple accounts, and
that indeed requires additional measures to handle. Sadly, there's a
disincentive to dealing with fake accounts, because trolls count as "sign-ups"
too.
> people will just migrate to a site that doesn't require that.
Like Youtube and Facebook still occasionally do. Well, most users haven't
abandoned them yet, so there goes that theory :)
~~~
shadowgovt
> occasionally
Precisely. How often? Close to never.
~~~
uniqueid
Give or take a live-streamed Christchurch massacre?
~~~
shadowgovt
Yes. One disastrous livestream in the history of the feature, with further
controls added almost immediately as a result.
~~~
uniqueid
Neither Youtube nor Facebook currently are able to filter _all_ these
incidents. This happened _after_ Christchurch:
[https://www.bangkokpost.com/thailand/general/1853804/mass-
sh...](https://www.bangkokpost.com/thailand/general/1853804/mass-shooter-
killed-at-korat-mall-20-dead)
I could swear there was also a Christchurch copy-cat attack in Europe, but I
can't remember sufficient details about it to find an article. Perhaps that
one wasn't streamed on FB.
I don't have encyclopedic knowledge of Facebook atrocities (I closed my
account nearly a decade ago), but without vetting content before it goes live,
I don't see how they will entirely prevent these videos from reaching _some_
users.
~~~
shadowgovt
I don't know that _all_ is realistic. It's the desired unattainable goal.
_Almost all_ is the current status quo. If a service like FB were to adopt a
volunteer moderator model, that track record would crash by definition because
the moderators would be seeing the garbage.
(That's before we factor in unintended consequences such as the risk that a
critical mass of moderators decide the garbage is signal and start passing it.
It's more risk than FB wants to take on for a problem they already solve via
paid employees).
------
zacharycohn
About 9 years ago, I found myself at a party consisting mostly of content
moderators for The Cheezburger Network.
I walked out of there shellshocked.
------
Thorentis
Only a human should be able to decide what other humans can and cannot see. A
future where computers are responsible for "deciding" what information we
consume about the world is not one I want to live in. Computers are already
being used as tools of censorship, we don't need it to be expanded further.
~~~
ece
If computers can help with spam, they can certainly help with other well-
defined and well-audited types of illegal and misinformed content.
It would be worse not to invest in more automation - malpractice, even.
------
afaq404alam
I hope they are tagging these videos and someone somewhere is working on a
neural network to classify the new ones.
------
alecco
Funny how they can automatically flag copyrighted videos now, even when
they're mirrored, but they can't identify these videos.
I can't believe there's so much original content for this. Maybe they could
share a db of hashes of known bad videos across sites and government agencies.
------
zitterbewegung
This is also relevant:
[https://www.theguardian.com/technology/2017/jan/11/microsoft...](https://www.theguardian.com/technology/2017/jan/11/microsoft-
employees-child-abuse-lawsuit-ptsd)
------
Magodo
I don't understand what's done with the contractors in the US after the
successful lawsuit, surely all those jobs were just moved offshore? There's no
mention of this at all in the article....
------
amelius
Perhaps it's a solution to use brainwave headsets. So whenever the viewer
would get too much negative stimulation, the video would stop and the viewer
could take a break.
~~~
droopyEyelids
The headset could also put a mark into their performance review for failing to
develop a sufficient coping strategy.
------
shadowgovt
This story is an interesting contrast to the story of Twitter dropping users
for spreading the QAnon conspiracy.
Clearly, there's near-universal agreement that _some_ moderation is fine; only
a handful of (downvoted) comments on this thread saying "The easiest way to
fix this problem would be for Facebook to stop censoring posts and let users
block what they don't want to see." But we clearly don't think that's a good
solution in this case.
The question of whether QAnon should also be blocked is one of degree, not
quality.
------
paulpauper
how do you become a Facebook moderator?
------
atlgator
Is Facebook worth it? Is it worth the trauma?
~~~
paulpauper
if it wasn't worth it, people would not do such jobs and people would not use
Facebook, so apparently for a lot of people it is worth it.
------
fareesh
Some great undercover video of what goes on at Cognizant here, as far as bias
in moderating actions is concerned:
[https://www.politicalite.com/latest/facebook-employee-if-
som...](https://www.politicalite.com/latest/facebook-employee-if-someone-is-
wearing-a-maga-hat-i-am-going-to-delete-them-for-terrorism/)
In this video it is shown that regarding beheadings, Facebook made it a point
to send their moderation contractors a memo that the image that was frequently
reported of President Trump's face with a knife at the neck where he is being
beheaded was an exception to the content policy on
beheadings/violence/incitement because it was considered to be "art" by some
museum somewhere in Portland - which from what I can see on TV - appears to be
an extremely volatile and violent place in some parts.
There is a reference to a cartoon post of Elmer Fudd shooting a cartoon gun at
Beto O'Rourke with the text "I'm here for your guns" at the top - which was
not given any similar exception.
I don't think Facebook or their partners are serious about this kind of thing
at all since they seem ok with advocating violence and gore when it resonates
with some personal opinions.
Also noteworthy is that the name of this whistleblower garners fewer search
results on Google than the name of another whistleblower, Eric Ciaramella,
which, if you uttered it a la Voldemort, would get you banned instantly.
Broadly, content policy at these websites seems to be a lord of the flies,
unprincipled hackjob of whatever the political machinery at the organization
deems worthy of steering society in the direction of their whims. Given the
influence and power of Facebook and other tech platforms, this should worry
people. I'm less inclined to be sympathetic to these folks and the difficulty
of their jobs, given the apparent willingness to become social
constructionists with the power that they wield so irresponsibly.
------
jbrennan
In my opinion this is an argument in favour of shutting Facebook (and every
other large social network) down. This sort of work is damaging and abusive,
and nobody should have to endure it just so we can have social networks.
I understand there are some jobs in the world that need to deal with dark
stuff (like law enforcement), but social networks just aren’t worth the human
cost.
~~~
dangus
That’s a non-solution.
What you’re proposing is not just shutting down social networks, it’s shutting
down any website that involves user content, anything that allows photo/video
upload, comments, or any kind of user interaction. That’s impossible.
You point out that public safety jobs are viewed as more “worth it,” and certainly
they are, but that logic brings up the question of who judges what job is
worth undergoing trauma.
In other words, is a subway or freight train driver’s job “worth it,” if they
have to see someone commit suicide on the tracks? What about crime scene
cleanup companies? Funeral services? Bus drivers? Truck drivers? Nobody’s
going to agree on where to draw the line in the sand.
A more realistic solution might be to make comprehensive support systems,
mental health resources, and treatment a legally mandated, completely free
service provided to any employee that works in these kinds of fields.
Finally, I think there are most certainly people out there who are not as
sensitive and affected by this content who would be candidates for these kinds
of roles. Perhaps there’s a way to test for that sensitivity before the real
job starts.
~~~
jbrennan
I’m actually not proposing shutting down any website that allows uploaded
content, just large / public sites that require this sort of moderation. Not
every site gets this stuff uploaded to it. The more private the network, the
less need for this kind of company-led moderation.
As far as “worth it” goes, some people have to be exposed to it so long as we
have law enforcement (but I’m certainly open to alternatives here). I’m not
sure the train operator is a fair comparison, because seeing a suicide is an
exceptional circumstance in their job, it’s not the norm. The content
moderators, however, are sadly expected to be exposed to traumatizing content
as part of their job description — it’s essentially the point of their job.
There are plenty of kinds of work we deem as hazardous to people’s health, and
thus are either banned or regulated. I’m not sure if there’s a healthy way to
expose people in these moderator jobs to the traumatizing content they face.
It just doesn’t seem worth the tradeoff to endanger them like this.
~~~
dangus
Think like a legislator. How do you write this regulation?
> [shut down] just large / public sites that require this sort of moderation
Let’s say I start a restaurant review website that allows comments and photos
to be uploaded. It does modest business for a while, I now have 50 employees.
I’m following the law because my site isn’t big enough to violate this “no
user content for big prominent websites” law.
Soon, it becomes big, like a major competitor to Yelp, and I’ve got 1,000
employees. But suddenly, this new law kicks in that says that I have to stop
accepting uploads because my site is too high profile. Now, I lay everyone off
and go out of business.
This just isn’t a workable solution, at least not in the particular way you’re
proposing it be constructed.
And really, you’re asking the second largest advertiser on the web (Facebook),
a Fortune 50 company, to just pack up its bags and shut down.
It’s not like I love Facebook or anything, but I’m sure their 45,000 employees
wouldn’t be happy about that.
------
pantaloony
Radical idea: if your service requires subjecting lots and lots of workers to
scarring images, the correct solution is for that service not to exist.
Facebook doesn’t have to let people post material to their site with very low
barriers to entry. Granted they have to if they want to continue to be what
they are and to make tons of money... but maybe they shouldn’t continue to be
what they are and make tons of money if this kind of abuse is necessarily
coupled to those outcomes.
“Well this sucks but we can’t keep operating if we don’t do it”. Well... sure
but the solution is _right there_ in that statement. Don’t keep operating.
~~~
rosywoozlechan
Any place where anyone can add user content has this problem. Anything with an
input box and a file upload would qualify under your "shouldn't exist" solution.
~~~
pantaloony
1) that seems basically fine, but also 2) non-commercial efforts run by
hobbyists and gated from the general public ought to be Ok. If you want to run
a PHPBB site and subject _yourself_ to harmful garbage by letting randos write
to your server, well, go nuts.
[edit] thought experiment: how many of the people making tons of money off
Facebook—c-suite, major shareholders—would find some other way to make money
if continuing to make Facebook mega bucks meant _they_ had to do this 5 days a
week? What would we think of any of them who chose “bring on the trauma, I
want those sweet greenbacks” and kept it up for _years_?
|
{
"pile_set_name": "HackerNews"
}
|
Blue Eyes Logic Puzzle - ZeljkoS
http://www.math.ucla.edu/~tao/blue.html
======
Mindless2112
Edit: it looks like I'm wrong.
The first argument is true; the second has the logical flaw. The flaw is
assuming that induction can continue despite additional pre-knowledge
available when there are greater numbers of blue-eyed people.
The statement will have no effect when the number of blue-eyed people is 3 or
more:
When the number of blue-eyed people is 0, the foreigner is lying, and if the
tribe believes him, everyone commits ritual suicide.
When the number of blue-eyed people is 1, the blue-eyed person did not know
there were any blue-eyed people in the tribe. Knowledge is added by the
statement, and the blue-eyed person commits ritual suicide.
When the number of blue-eyed people is 2, the blue-eyed people knew that there
was a blue-eyed person but did not know that the blue-eyed person knew that
the tribe had any blue-eyed people. Knowledge is added by the statement, and
the blue-eyed people commit ritual suicide.
When the number of blue-eyed people is 3 or more, the blue-eyed people knew
that there were blue-eyed people and knew that the blue-eyed people knew that
there were blue-eyed people. No knowledge is added by the statement, and no
one commits ritual suicide.
Edit: clarity.
~~~
MereInterest
If n=3, then each of the blue-eyed people know that there are blue-eyed people
and know that the other blue-eyed people know. However, they don't know that
all the blue-eyed people know that the blue-eyed people know. This is the
piece of information that is learned by the statement given.
In general, if there are N blue-eyed people, then it is the Nth abstraction of
"he knows that I know that he knows that I know that..." that is learned by
the statement.
~~~
voyou
"they don't know that all the blue-eyed people know that the blue-eyed people
know"
Yes, they do. In the three-person case, a blue-eyed person can see two other
blue-eyed people, A and B, and they know that A can see B, and vice-versa, so
they know that A and B both know that there are blue-eyed people, and they
know that both A and B would be able to us the same logic they used, so they
also know that A and B know that A and B know that there are blue-eyed people.
------
DanielStraight
Randall Munroe has a much more thorough write-up on (a variation on) this
puzzle:
[http://xkcd.com/solution.html](http://xkcd.com/solution.html) (Though you
might want to click his link to the problem description first since his is a
variation.)
~~~
md224
And for an even more detailed examination of the underlying philosophical
issues:
[http://en.wikipedia.org/wiki/Common_knowledge_(logic)](http://en.wikipedia.org/wiki/Common_knowledge_\(logic\))
[http://plato.stanford.edu/entries/common-
knowledge/](http://plato.stanford.edu/entries/common-knowledge/)
~~~
bonobo
I understand the induction steps, but what I don't get is why the foreigner's
statement triggers the logic induction. This quote from your first link sums
it well:
What's most interesting about this scenario is that, for k > 1, the outsider
is only telling the island citizens what they already know: that there
are blue-eyed people among them. However, before this fact is announced,
the fact is not common knowledge.
It seems natural to me that they didn't commit suicide before the statement
(somehow induction doesn't kick in there), and that they did it after the
statement, but I don't understand why. Isn't the fact that there are k > 1
islanders with blue eyes _common knowledge_ too?
I mean, what bit of information is added here?
~~~
matchu
What's added is the common knowledge that everyone _else_ knows that Blue > 1
(including the foreigner), and that everyone else knows that everyone else
knows that Blue > 1, etc.
Consider two blue-eyed people, Alice and Bob. Alice sees Bob's blue eyes and
knows that Blue ≥ 1, and vice-versa. But Alice thinks, "What if I have brown
eyes? In that case, Bob wouldn't know that Blue ≥ 1." So, everyone knows that
Blue ≥ 1, but nobody knows that everyone knows that. Then the foreigner comes
along and tells them that, between Alice and Bob, Blue ≥ 1. Now Alice knows
that Bob knows that Blue ≥ 1, and realizes that, if Alice has non-blue eyes,
Bob will use the new information that Blue ≥ 1 to conclude that he has blue
eyes, and therefore commit suicide on the next day. When he doesn't, she
concludes that she must have blue eyes, and commits suicide. Bob goes through
the exact same logic as Alice.
Though the foreigner's statement did not tell Alice anything new about eye
color distribution, it _did_ tell her something about Bob's knowledge. The
same goes for Bob, who learns about Alice's knowledge.
The logic is a bit more difficult to talk through with three people, so
generalizing it further is left as an exercise ;P It comes down to "everyone
knows that everyone knows that Blue ≥ 1", which the foreigner _also_
contributes as common knowledge by making his announcement. For more blue-eyed
people, recur on that statement as many times as necessary.
~~~
matchu
Small clarification: For the two-person case, it's important that everyone
knows that everyone knows that Blue ≥ 1. For the three-person case, it's
important that everyone knows that everyone knows that everyone knows that
Blue ≥ 1. (The last paragraph is a bit unclear and/or wrong.)
------
hornd
I think the foreigner's statement is ambiguous enough to render the proof in
argument 2 incorrect. "... another blue-eyed person", to me, implies that a
single person in the tribe has blue eyes. All of the tribespeople will see
multiple tribespeople with blue eyes, and therefore assume the foreigner's
statement was wrong, rendering it of no effect.
~~~
Mindless2112
It's not so much that the statement was ambiguous or wrong -- it's that the
statement gave no one in the tribe more information than was previously
available to him/her.
Everyone in the tribe already observes at least 99 members with blue eyes, and
everyone knows that everyone else observes at least 99 members with blue eyes,
so the statement should have no effect.
Edit: split to separate comment.
~~~
dllthomas
But they don't know "everyone knew on day X" so there is ambiguity about when
they would be counting from, so nothing can be deduced. The foreigner's
statement provides a synchronization point: clearly everyone knew at that
point.
------
dnautics
> Now suppose inductively that n is larger than 1. _Each blue-eyed person will
> reason as follows:_ “If I am not blue-eyed, then there will only be n-1
> blue-eyed people on this island, and so they will all commit suicide n-1
> days after the traveler’s address”.
Why should not a brown-eyed person reason as follows as well? It is at this
stage that an implicit "counting" of the blue-eyed population creeps into the
flawed proof.
EDIT: I misidentified the place where the flaw comes in. Will repost a better
explanation.
~~~
tbrake
They would. The traveller has accidentally doomed them all.
edit: this is based on an unfounded assumption because of the wording of the
problem - they may only know they don't have blue eyes. But when I read "100
have blue eyes and 900 have brown" it makes it sound binary and I assumed
that's knowledge the tribes people have as well, i.e. we have only either blue
or brown eyes.
~~~
thedufer
In Randall Munroe's version of the problem, he gives the visitor red eyes to
explicitly avoid this binary assumption. Obviously, the visitor's statement is
slightly different because of this.
------
aolol
Taking the 'dramatic effect' argument further: if all the blue-eyed people
kill themselves, would the brown-eyed people all simultaneously know they have
brown eyes and also have to kill themselves?
~~~
Jehar
This is the missing element from most explanations I see of this problem. We
all look at the n cases from the POV of a blue-eyed person. An outside
observer with brown eyes has the same level of information available to him,
so it seems to me just as likely that after the first day, each person,
regardless of eye color, could reason "nobody left the previous day, so
the visitor must have been referring to me". So either eventually everyone
dies, or they all realize the paradox and forgo the ritual.
~~~
Mindless2112
Each blue-eyed person observes 99 other blue-eyed people in the tribe, thus
reasoning that he/she has blue eyes on the 100th day. However, each brown-eyed
person observes 100 blue-eyed people in the tribe, thus reasoning that he/she
has blue eyes on the 101st day (however this does not happen because on the
100th day all the blue-eyed people commit ritual suicide.)
~~~
IanDrake
Doesn't that presuppose they know there are 100 blue eyed people in their
tribe? When that information is presented to the reader, it's presented as
outside knowledge.
If they knew the color counts, they would know their eye color and all would
have to commit suicide. The fact that the tribe still existed means, they
didn't know the totals.
For example, if I know there are 100 people with blue eyes and I can count as
many without including myself, then I must have brown eyes and must kill
myself.
So again, there is no possible way the tribe had any idea what the _exact_
counts were.
As a brown-eyed person, there are either 100 blue-eyed people, meaning I have
brown eyes, _or_ there are 101 blue-eyed people and I have blue eyes. If a
census was ever taken and the exact number known everyone would have to commit
suicide.
Since the visitor didn't mention an exact number then there is still no way to
know if you have blue or brown eyes.
However, the tribe now knows that the visitor knows he himself has blue eyes.
Will they make him follow their ritual?
Update: OK, after reading the link in the first comment, I _get it_.
~~~
Mindless2112
It is not necessary for the blue-eyed people to know the total number of blue-
eyed people in the tribe, they can deduce it at day 100:
* A blue-eyed person observes 99 blue-eyed people.
* On day 99, the blue-eyed people do not commit ritual
suicide.
* Thus each blue-eyed person learns that all the blue-eyed
people also observe 99 blue-eyed people.
* Thus the blue-eyed person knows that the other blue-eyed
people must observe that he/she has blue eyes.
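That deduction can be checked mechanically. A small Python sketch (it assumes
the announcement gives everyone a shared day 1 to count from, and it encodes
the conclusion of the induction as each islander's decision rule rather than
deriving it from first principles):

    def simulate(eye_colors):
        """eye_colors: list of 'blue' / 'brown'.  Returns {day: islanders
        leaving at noon that day}, with days counted from the announcement.

        Rule for islander i (never inspects their own colour): if nobody has
        left yet and today is day blue_seen[i] + 1, then the blue-eyed people
        i can see have failed to leave on schedule, so i must be blue too."""
        n = len(eye_colors)
        blue_seen = [sum(eye_colors[j] == 'blue' for j in range(n) if j != i)
                     for i in range(n)]
        departures = {}
        for day in range(1, n + 2):
            if departures:
                break   # once anyone leaves, later inferences are cut off
            leaving = {i for i in range(n) if day == blue_seen[i] + 1}
            if leaving:
                departures[day] = leaving
        return departures

    result = simulate(['blue'] * 100 + ['brown'] * 900)
    print({day: len(who) for day, who in result.items()})   # -> {100: 100}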
------
vytasgd
The traveler has no effect. The logical flaw is that with n > 2 blue eyed
people, everybody knows that there is at least 1 blue eyed person AND
everybody knows that everybody ELSE knows that there is at least 1 blue eyed
person.
The traveler's comments would only have an effect with n<=2 blue eyed people.
With n = 1, he'd instantly know. With n = 2, the 2nd blue eyed person would
recognize that the first person now has the information and if he doesn't
commit suicide on the first night, then the 2nd blue eyed person knows the 1st
blue had the information before, meaning he saw somebody else, and then they
both die on night 2. n > 2, the info is already out that blue-eyed ppl exist
and the count has already started.
------
Symmetry
So, what the visitor is providing is really the coordination, the point at
which you can measure 100 or 99 days. But doesn't this setup require that
there have always been 100 blue eyed people since forever? Any birth or death
or all the islanders being crated at once would serve equally well as a timer.
It seems like this problem only works because the blue-eyed islanders all know
that there are 99 other islanders with blue eyes, but there was no moment in
time where they learned it. And since that is so contrary to our expectations,
it's what ends up making the whole scenario seem so unintuitive.
~~~
lisper
No, the key is that the foreigner's statement establishes common knowledge at
some point in time. What happened before that point in time is irrelevant.
> the blue-eyed islanders all know that there are 99 other islanders with blue
> eyes
That's true, but what they don't know (until the 2nd day) is that all the
other blue-eyed islanders know that all the other blue-eyed islanders know
that there are 99 other blue-eyed islanders. On the 3rd day they will realize
that all the other blue-eyed islanders know that all the other blue eyed
islanders know that all the other blue eyed islanders know that... and so on.
Then on the 100th day there are 100 iterations of recursive knowledge, and all
the blue-eyed islanders realize they themselves must have blue eyes.
UPDATE: note that it is crucial that all the islanders are together when the
foreigner makes his statement. If he goes to each islander individually and
says "some of you have blue eyes" then it doesn't work. What matters is not
the statement, but that all the islanders witness all the other islanders
hearing the statement.
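The same point in the standard epistemic-logic notation (a sketch of the usual
textbook formulation, not part of the original puzzle statement): write K_i(p)
for "islander i knows p" and let G be the group of islanders. Then

    E_G\,\varphi = \bigwedge_{i \in G} K_i\,\varphi

    C_G\,\varphi = \varphi \wedge E_G\,\varphi \wedge E_G E_G\,\varphi \wedge E_G E_G E_G\,\varphi \wedge \cdots

E_G is "everyone knows" and C_G is common knowledge. With N blue-eyed
islanders and p = "someone has blue eyes", E_G^k(p) already holds for every
k <= N - 1 before the address, but not for k = N; the public statement is what
upgrades p to C_G(p), and that extra level of nesting is exactly what each
round of the induction consumes.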
~~~
dllthomas
Strictly, all _blue eyed people_ need to hear the statement, right? If someone
is missing and everyone (but them) knows the missing person has brown eyes,
that doesn't change the logic of those who heard.
~~~
lisper
That's right.
Which suggests some follow-on puzzles:
1\. What happens if one blue-eyed person is somewhere else on the island when
the foreigner makes his statement (and his absence is known to everyone)?
2\. What happens if the next day a blue-eyed stranger wanders into the
village, thereby establishing common knowledge that the day before there was
in fact an additional blue-eyed person on the island (though no one in the
village knew it at the time)?
3\. What happens if the next day a blue-eyed baby is born in the village?
~~~
thaumasiotes
1\. Suppose the foreigner makes his statement to a group of islanders _C_
("contaminated"), and the rest of the islanders _P_ ("pure") do not hear it,
and it is known to all that they didn't hear it. Call the group of blue-eyed
people _B_. Then the intersection of _C_ with _B_ will kill themselves after a
number of days equal to the size of that group.
2\. Nothing. (I interpreted this as being without a statement by a foreigner.)
3\. Nothing. (I also interpreted this one as being without a statement by a
foreigner. With such a statement, it's the same problem as case 1; everyone
will recognize that the baby, having not existed on Foreigner Day, can't know
about nor have been mentioned in the statement.)
EDIT:
I should point out that I've assumed the foreigner's statement refers to the
group he's addressing, not to the population of the island. ("At least one of
you who I see before me has blue eyes".)
With a better interpretation of your problem 2:
2a. On some day, the foreigner addresses a village, saying "at least one
person on the island has blue eyes". A blue-eyed stranger wanders into the
village shortly after he leaves, allowing the villagers to believe that he was
referring to the stranger.
In this case, there is no synchronization point, and "nothing" will still
occur.
2b. A blue-eyed stranger wanders into the village _the day after_ the
foreigner leaves, allowing the villagers to believe that he was referring to
the stranger.
As far as I can see, this has gone back to case 1 again. The foreigner's
statement provoked a first day of blue-counting, and while it is revealed to
have possibly not meant what they thought it meant, day 1 of blue-counting is
sufficient for day 2. The blue-eyed villagers should kill themselves after a
number of days equal to the size of their group. (The stranger, even if he
settles into the village, will be unaffected.)
------
thaumasiotes
Here is the part that people seem to miss:
> If a tribesperson does discover his or her own eye color, then their
> religion compels them to commit ritual suicide _at noon the following day_
> in the village square for all to witness.
Emphasis mine, of course.
You can think of this clause as specifying the clock speed (one cycle per day)
of the logical machine that is the island.
------
Smaug123
There's an infuriating variant, which I have as yet been unable to solve:
An infinite sequence of people have either blue or brown eyes. They must shout
out a guess as to their own colour of eyes, simultaneously. Is there a way for
them to do it so that only finitely many of them guess incorrectly?
~~~
anonymoushn
And none of them have any knowledge about anything?
~~~
Smaug123
They can all see everyone else's eyes. That is, person N can see person M's
eyes, for all M,N. [I don't know whether it's possible or not - it feels not,
but it has been hinted to me that it is possible.]
------
dllthomas
Personally, I'm convinced the blue-eyed people die on the hundredth day _if
they haven't earlier_ \- I am not convinced there is no shorter path to the
information (though I certainly don't know of one).
------
informatimago
1- Nothing says that there were 1000 islanders on the day the alien made his
speech. Some may have discovered their eye color and died before.
2- Compare: “how unusual it is to see another blue-eyed person like myself in
this region of the world” with: “how unusual it is to see other blue-eyed
persons like myself in this region of the world”
To me, it is clear that the entire tribe, on the day the stranger makes his
speech, consists of N brown-eyed people and one single remaining blue-eyed person.
They immediately understand the same thing, and all commit suicide the next
noon.
------
konceptz
This is a lovely way to teach induction and proofs as part of a class. I
personally prefer the dragon version of this problem. An elegant solution is
provided here.
[https://www.physics.harvard.edu/uploads/files/undergrad/prob...](https://www.physics.harvard.edu/uploads/files/undergrad/probweek/sol2.pdf)
------
lazyant
Shouldn't it be better if people had to commit suicide in their own houses? I
mean if there are 2 people with blue eyes and on the second day they see each
other at the town square about to commit seppuku, they'd figure perhaps the
other is the only one with blue eyes. Or this whole thing went over my head.
------
spongerbakula
Ok, I'm pretty sure I'm being moronic and missing something, but does the
foreigner give the tribe any more information? Does the tribe already know how
many blue eyed and brown eyed people there are?
~~~
miahi
If they already knew how many of each, then they would all be dead, as any of
them would see that the numbers add up only if he was in a specific group.
------
DannoHung
I think the problem also is missing some part where the people know for sure
that they can only have blue or brown eyes.
------
elwell
Well let's try the experiment a few times and see if our results match up with
our theories in double-blind tests.
~~~
thaumasiotes
The problem statement specifies that the population of the island all reason
with perfect logic. Compare that to the reasoning displayed by my babysitter's
son once:
Me: I saw my parents wrapping a Christmas present, but on Christmas
when I received that present, it was labeled "from Santa".
Him: Santa Claus is real.
Good luck finding a suitable population to experiment on. ;)
|
{
"pile_set_name": "HackerNews"
}
|
Ask HN: My first Startup Weekend - 404error
Hello all,

I work for a small newspaper on the Central Coast of California. Santa Maria, Ca to be exact.

Someone has organized the first ever Start-up Weekend in my city (that I'm aware of). Since joining the hacker news community I have always wanted to attend one, but my geographic location and the feeling that my skills are sub-par has always stopped me. Now that this event is in my backyard, I don't have an excuse to not attend.

Being in an Agricultural/ Farming community I don't know how big of a tech community there is here.

I don't have an idea to pitch, but I am very interested in getting involved in a start-up.

Any advice on what to expect, or how to approach the whole event?

Thanks in advance.

http://santamaria.startupweekend.org/
======
jlengrand
I wrote about this a few months ago : [http://www.lengrand.fr/2012/12/how-we-
won-our-first-startup-...](http://www.lengrand.fr/2012/12/how-we-won-our-
first-startup-weekend/)
Basically, no need to be technical. I would even say that not being technical
is better than being too technical. The whole experience is awesome. You meet
so many cool people.
Just go and enjoy, you'll surely get inspired :).
------
orangethirty
Just be yourself. Though you should make an effort to talk to people, shake
hands. Join in group conversations. Buy a beer or two for people you find
interesting. That's pretty much it. Oh, and have fun.
|
{
"pile_set_name": "HackerNews"
}
|
Intel's take on GCC's memcpy implementation - mtdev
http://software.intel.com/en-us/articles/memcpy-performance/
======
wolf550e
This article is old: March 9, 2009 1:00 AM PDT
Nowadays glibc has modern SSE code and the kernel uses "rep movsb". The kernel
can store and restore FPU state if the copy is long and doing SSE/AVX is worth
it. Someone on the Linux kernel mailing list measured that performance depends
on src and dest being 64-byte aligned compared to each other: if they are
aligned, "rep movsb" is faster than SSE.
The thread: <https://lkml.org/lkml/2011/9/1/229>
[http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git...](http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=arch/x86/lib/memcpy_64.S;hb=HEAD)
[http://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86_...](http://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86_64/multiarch/memcpy-
ssse3.S;hb=HEAD)
------
abrahamsen
> the developer communications don't appear on a public list. There is no
> visible public help forum or mail list
<http://dir.gmane.org/index.php?prefix=gmane.comp.lib.glibc>
Seems public to me.
~~~
ominous_prime
The list is publicly archived, but glibc's maintainer (Ulrich Drepper)
actively discourages public interaction for the project. The project's policy
is that bug reports should almost always go through a Linux distribution, and
to say it nicely, Drepper can be difficult to persuade.
Debian was in the process of switching to eglibc in order to avoid glibc (and
Drepper), and fix issues they saw with the library.
------
shin_lao
A couple of years ago, before SSE existed, I wrote a highly optimized memory
copy routine. It was more than just using movntq (non-temporal stores are
important to avoid cache pollution) and the like: for large data I copied
chunks into a local buffer smaller than one page and then copied that to the
destination. Sounds crazy? It actually was much faster because of page locality.
For small chunks, however, nothing was faster than rep movsb, which moves one
byte at a time.
------
memset
Someone tell me if I am mistaken - but it looks like the main difference
between GCC's and Intel's memcpy() boils down to gcc using `rep movsl` and icc
using `movdqa`, the latter having a shorter decode time and possibly shorter
execution time?
~~~
bdonlan
No, the problem is with x86-64, which apparently doesn't use `rep movsl`; as
far as I can tell, GCC's x86-64 backend assumes that SSE will be available,
and so only has a SSE inline memcpy. However, in the kernel SSE is not
available (as SSE registers aren't saved normally, to save time), so this is
disabled. With no non-SSE fallback (such as `rep movsl` on x86), gcc falls
back to a function call, with the performance impact this implies.
~~~
sliverstorm
From the sound of it, the function call was not the issue, so much as the
function that gets called is old and non-optimal with modern tools.
------
JoeAltmaier
I'm sad that computers in this modern age still require me to be in their
business. Doesn't it seem like the cpu's own business to move bytes
efficiently? Why is the compiler, much less the programmer, involved? The
tests being made in the compiler/lib are of factors better-known at runtime
(overlap, size, alignment) and better handled by microcode.
~~~
Andys
Hardware improvements necessarily move slower than software, especially when
carrying the complex historical baggage of out-of-order execution of x86.
To be fair, things are improving. eg. The latest Intel CPUs no longer need
aligned memory to avoid slowing down.
~~~
JoeAltmaier
Really? That's huge!
A really robust memmove library routine should handle about eleven different
factors, one of which is alignment. I don't know of ANY library that handled
that right, probably because its so hard. E.g. unaligned source, unaligned
dest with Different alignment is very hard. Usually they settle on aligning
the destination (unaligned cache writes are more expensive). The true solution
is to load the partial source, then loop loading whole aligned source words,
shifting values in multiple registers to create aligned destination words to
store.
That all requires about 16 different unrolled code loops to cover all the
cases. Nobody bothers. So nobody ever got the best performance in a general
memmove anywhere. Sigh.
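To illustrate the "load aligned words, shift and merge" idea in the abstract,
here is a toy model in Python (purely illustrative: real implementations are
unrolled assembly keeping everything in registers, and the offsets and test
data here are made up):

    def shifted_copy(src: bytes, src_off: int, n: int) -> bytes:
        """Copy n bytes starting at src_off using only aligned 8-byte loads
        from src, merging adjacent words with shifts (little-endian, as on
        x86)."""
        W = 8
        MASK = (1 << (8 * W)) - 1
        base = (src_off // W) * W     # first aligned word containing src_off
        shift = src_off - base        # bytes of that word to skip
        prev = int.from_bytes(src[base:base + W].ljust(W, b"\0"), "little")
        out, pos = bytearray(), base + W
        while len(out) < n:
            cur = int.from_bytes(src[pos:pos + W].ljust(W, b"\0"), "little")
            # one aligned destination word = tail of prev + head of cur
            word = ((prev >> (8 * shift)) | (cur << (8 * (W - shift)))) & MASK
            out += word.to_bytes(W, "little")
            prev, pos = cur, pos + W
        return bytes(out[:n])

    data = bytes(range(64))
    assert shifted_copy(data, 3, 20) == data[3:23]   # misaligned source
    assert shifted_copy(data, 8, 16) == data[8:24]   # aligned case still works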
~~~
Andys
PCs will never be perfect. Huge compromises have had to be made all over the
hardware and software, to give us the cheap, ubiquitous computing power which
drives the Internet.
------
vz0
Agner Fog found this issue a year earlier, in 2008:
<http://www.cygwin.com/ml/libc-help/2008-08/msg00007.html>
|
{
"pile_set_name": "HackerNews"
}
|
Bullet vs. Prince Rupert's Drop at 150,000 Fps - woobar
https://www.youtube.com/watch?v=24q80ReMyq0
======
googlebreak
[https://www.regonline.com/registration/checkin.aspx?EventID=...](https://www.regonline.com/registration/checkin.aspx?EventID=1933828)
|
{
"pile_set_name": "HackerNews"
}
|
Google is starting to reveal the secrets of its experimental Fuchsia OS - Tomte
https://www.theverge.com/2019/5/9/18563521/google-fuchsia-os-android-chrome-hiroshi-lockheimer-secrets-revealed
======
mimixco
I can't help but think this is another attempt to lock down people and devices
into a Google-controlled platform where they can be spied on and sold to the
highest bidder.
Does it seem suspicious to anyone that Google is pushing Fuchsia just as truly
open Android phones like Puri.sm and /e/ are coming to market?
~~~
xparco
Ugh...the conspiracy talk is baseless
~~~
coffekaesque
Android is open source but without Gapps you can't use 90% of the mainstream
apps or certain features (Google Play Services, Frameworks or whatever it's
called now). That's why Android without Google is a complete pain, and why
microG exists. We're already locked in.
|
{
"pile_set_name": "HackerNews"
}
|
Piercing The Corporate Veil - Jim_Neath
http://www.avc.com/a_vc/2010/03/piercing-the-corporate-veil.html
======
hga
" _I said last week that forming a company is the best way to "putting a
buffer between you and the business." But as Shawn and others point out in
last week's comment thread, you can't just pretend to be a business, you have
to be a business._ "
Else you might find yourself in litigation where that buffer disappears and
you are personally liable, the term of art being "Piercing The Corporate
Veil".
Don't let this happen to you.
Additional note: something that _always_ pierces the corporate veil is failure
to pay payroll taxes. No matter whose fault it is, the IRS will come after you
personally with a rusty knife. Don't try to finesse this and make absolutely
sure you trust whomever is actually doing it.
|
{
"pile_set_name": "HackerNews"
}
|
CommandQ – Never accidentally quit an app again - webdevetc
https://clickontyler.com/commandq/
======
makecheck
This type of thing can be done by setting up key bindings in System
Preferences, either per app or as a global default.
And the macOS version isn’t limited (e.g. I globally rebind Minimize because I
got tired of hitting the wrong key and seeing a window disappear).
~~~
webdevetc
Ah maybe, I don't know. But that app (CommandQ) lets you still use CMD+Q, but
you have to hold it down for 5 seconds or so before it'll quit any app.
(It isn't my app - I just like using it, and get annoyed when I use a machine
without it and accidentally hit CMD+Q instead of CMD+A.)
|
{
"pile_set_name": "HackerNews"
}
|
(Intro To) Map, Reduce and Other Higher Order Functions - netcraft
http://ryanguill.com/functional/higher-order-functions/2016/05/18/higher-order-functions.html
======
russellbeattie
That's the first article I've ever read that cleanly explained reduce - not
just what it does, but why and how to use it.
~~~
netcraft
as the author, I appreciate you saying so!
|
{
"pile_set_name": "HackerNews"
}
|
Amazon sellers say that the company is losing millions to scammers - exolymph
https://www.inc.com/sonya-mann/amazon-fraud-scam-sellers.html
======
johnsocs
Over the last year or two I have noticed that a bit more care and scrutiny
needs to be put into reviewing the seller and product before I make my
one-click purchase. Overall I do think they would get more out of me as a
consumer if the marketplace was a bit more locked down. In some ways it's
starting to look like eBay or AliExpress.
~~~
bnolsen
I only got hit by that recently. A few of items in my "Save for later" list
dropped price one day early this month. I purchased them on a whim one day
without thinking too hard...one item was successfully shipped to a totally
different state. Another 2 were listed with 3-4 day shipping estimates but
were given chinese tracking numbers many days after they were supposed to
arrive with no status updates or seller communication. One just got an update
2 weeks after initial delivery date...the one amazon put a pay hold on. I'm
guessing best case it's a wrong/counterfeit item. The other 2 have already been
refunded by amazon.
------
SomeStupidPoint
Bezos can say whatever he wants about eternal 'Day 1', FBA and FMA are 'Day 2'
features to the core: they're fragile, anti-customer features that only serve
Amazon's logistical needs at the expense of customers. If anything, Amazon
falls victim to data-as-proxy (which he also decries) in thinking these
features are good. The Bezos letter to shareholders is a good illustration of
what Amazon _should_ be doing, not what they are, and why they're currently
struggling as a business.
When Amazon puts their operations and money where their mouth is, I might
believe them.
------
garethsprice
I purchased a newly released book the other day that was from a seller and
listed way below market rate; the order was cancelled and refunded within a couple
of hours. Noted the user had 2-3 pieces of feedback from years ago, but
appeared to have thousands of new books listed at crazy low prices that also
presumably did not exist. Any idea what the scam is there?
Amazon Marketplace is a cesspool, definitely affects my confidence in the
brand. I try and order items directly from Amazon now, but have heard that
even those supply chains can contain a lot of co-mingled counterfeits.
~~~
exolymph
In that case it was maybe canceled and refunded because Amazon caught the
scammer?
~~~
garethsprice
Nope they're still up, with 628 pages(!) of products that appear to be marked
as "Currently Unavailable". Some sort of arbitrage or marketing bot? There's a
lot of weirdness going on with Amazon Marketplace sellers, that's for sure.
------
dqv
I always wonder if things like this are caused by corporate espionage. Walmart
comes to mind as someone who would benefit from the "reverse PR" against
Amazon. Apparently Walmart has been in China since 1996. I wonder if they
could influence a distributor to poison an Amazon supply chain.
------
cyber
I wouldn't be surprised if this was largely accounted for by supply-side
contamination issues. It appears that Amazon treats all SKUs identically,
sourcing the closest one to fulfill an order, regardless of how that SKU
arrived at Amazon.
A concrete example: No Starch Press customers are receiving counterfeit books
when ordered from No Starch Press' Amazon store.
Even if this scam is caught, it's still cost Amazon money in dealing with the
issue. (And cost legitimate vendors no end of frustration with legitimate
customers receiving fake books.)
~~~
exolymph
I actually wrote about No Starch's ordeal as well :)
[https://www.inc.com/sonya-mann/amazon-counterfeits-no-
starch...](https://www.inc.com/sonya-mann/amazon-counterfeits-no-starch.html)
FBA commingling is a whole ball of wax on its own.
------
tyingq
_" Amazon has zero tolerance for fraud"_...from sellers.
Buyers, feel free to unbox your purchase, and return a potato for a full
refund, provided the product was sold by one of our 3rd party merchants.
|
{
"pile_set_name": "HackerNews"
}
|
Three reasons I avoid anonymous JavaScript functions like the plague - sanderson1
https://hackernoon.com/three-reasons-i-avoid-anonymous-js-functions-like-the-plague-7f985c27a006#.42sstf8pw
======
Cozumel
It's a bad workman who blames his tools.
------
labrador
I like to write my code so it is easy to debug and easy to reuse, so I like
that these ideas are reinforced here.
|
{
"pile_set_name": "HackerNews"
}
|
How do I choose an architecture for my scraping web app? - tnsaturday
I have written a bunch of simple Python scripts that parse and scrape information on the web and store it in JSON files, so that I can access that data and work with it. Now I want to build a web application on top of my project. The basic use case would be:<p><pre><code> User inputs a search request in an HTML form.
User gets a response without reloading the page.
</code></pre>
Seems fairly simple; however, a couple of questions arise when it comes to choosing a technology, or a technology stack:<p>1. SQL vs JSON.<p>I store my data in JSON files now, which seems to work quite nicely: I have an array with 700-800 objects of about 5-6 key-value pairs of unicode strings and urls:<p>My old laptop searches through that data blazingly fast, but what about the web? What will happen when multiple users try to access the same file at the same time? The question is, how much slower is reading and searching a JSON file as opposed to a SQL database?<p>2. Python in the PHP way.<p>I use Python as a general-purpose language; I write backends in PHP. This time not only do I want to write the parsing/crawling part of this app in Python, but I also want to create and serve web pages using Python.<p>However, it turns out that Python gets quite complex when it comes to developing a web app. You have to use a full-fledged web framework such as Django, which seems like overkill to me, especially in my case when I do not need to worry about storing users' data, no sign up or sign in is required, and no email checking. So, the second question is, can I form and serve valid HTML documents with Python using inline tags as with the Apache + PHP stack? Like the following:<p>If not, can it be done significantly more easily than in Django using all that MVC (they also call it MTV and I haven't figured out why yet) stuff?<p>3. AJAX<p>The third part is about serving that content in a search result container without reloading the page. How do I access a local JSON file with jQuery, or should I query my server with an XHR?
======
gusmd
I've recently been in a similar situation. After some research, I decided to
go with Flask. It is so easy to get started, you will drop that _python is
complex for a web app_ opinion very quickly.
Now, over the last couple of weeks, I've migrated my app to Quart [0] due to
asyncio support. It is pretty much a drop-in replacement of Flask, and has
great support for websockets, which I use extensively. You could use that (or
the also supported server-side events), in conjunction with JS, to update your
webpage.
[0] [https://gitlab.com/pgjones/quart](https://gitlab.com/pgjones/quart)
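To make the Flask suggestion concrete, here is a minimal sketch of what the search endpoint could look like, assuming the scraped records sit in a single data.json file and a case-insensitive substring match over the string fields is enough. The file name, route, and matching logic are illustrative guesses, not code from the original post. With only 700-800 small records, loading everything into memory once at startup also sidesteps the concurrent-file-access worry entirely.

    # Minimal sketch (illustrative names): a Flask app serving search results
    # from an in-memory copy of the scraped JSON data.
    import json

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Load the ~800 records once at startup; a linear scan per request is
    # plenty fast at this size, so no database is strictly needed yet.
    with open("data.json", encoding="utf-8") as f:
        RECORDS = json.load(f)

    def matches(record, query):
        """Case-insensitive substring match over every string field."""
        q = query.lower()
        return any(isinstance(v, str) and q in v.lower() for v in record.values())

    @app.route("/search")
    def search():
        query = request.args.get("q", "").strip()
        results = [r for r in RECORDS if matches(r, query)] if query else []
        return jsonify(results)

    if __name__ == "__main__":
        app.run(debug=True)

On the front end you wouldn't read the JSON file directly with jQuery; the browser can't open files sitting on the server's disk, so the form handler just fires an XHR/fetch to /search?q=... and renders the returned JSON without a page reload.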
------
is_true
What kind of data? That's key IMO.
If Django is too much for your needs maybe you could start with Flask.
|
{
"pile_set_name": "HackerNews"
}
|
Satya Nadella: DREAMers make our country and communities stronger - coloneltcb
https://www.linkedin.com/pulse/dreamers-make-our-country-communities-stronger-satya-nadella
======
e9
This really frustrates me. Children should not be punished for the crimes of
their parents, but they should also not benefit from them. As a legal immigrant who
went through tons of trouble to get my green card after 8 years, I feel
insulted when people praise illegal immigrants. If you want open borders then
change the law; until then it's illegal. To me this is similar to parents
robbing a bank and giving their children 100K, and the government saying "oh, don't
punish children for their parents' crimes, so go ahead and keep those 100K, it's ok".
They should be deported and blame their parents for it, not the government.
~~~
drewrv
The thing about the immigration debate in this country is that neither side
can agree on the severity of entering our country without papers. You just
compared it to robbing a bank. I think of it more like jaywalking.
The thought that a child would be taken from their home and put on a one way
plane to a place they've never been before because their parents are guilty of
jaywalking seems dystopian.
~~~
amagaeru
So how do you feel about trespassing?
------
tristram_shandy
>As a CEO, I see each day the direct contributions that talented employees
from around the world bring to our company, our customers and to the broader
economy. We care deeply about the DREAMers who work at Microsoft and fully
support them. We will always stand for diversity and economic opportunity for
everyone.
Diversity doesn't require illegal immigration, nor are the children of illegal
immigrants particularly beneficial to their host country -- if you have any
faith in our system (that should be designed to select for the best and
brightest), you'd have to admit that the immigrants admitted by a real merit-
based immigration system would be superior to immigrants admitted more or less
at random. The only traits that illegal immigration selects for are
desperation and a willingness to break laws and live on the fringes of
society.
Not that I believe Satya Nadella is sincere in any case, this is just evidence
of the growing politicization of large tech companies (as the anti-trust suits
loom) -- large corporations don't support liberal causes out of genuine
concern (corporations are by default, sociopathic) -- they're trying to do
political astroturfing: the goal is to prolong the continued existence of
their monopolies. If the issue of concentration of power in the tech giants
were to be handled solely (and dispassionately) by the appropriate financial
regulatory bodies, it would be over very quickly. The large tech giants plan
to avoid that by making their existence an ongoing issue in the cultural cold
war, split the popular opinion down the middle, and make the issue too
political to ever be resolved.
~~~
muddi900
I think the suggestion is that if the immigrants were Caucasian, this would
not be a big deal. Which is true; most people of Irish, Scottish and German
descent came here when the standards of entry were far less rigorous. If lax
immigration policy selects for illegal behavior, then most Caucasians should be
sequestered into camps, should they make us the victims of their illegal
behavior.
~~~
nostrademons
Irish immigrants during the 1840-1900 period were not initially considered
"white", nor were Italians & Spanish during their period of peak immigration
(1890-1920 & 1830-1860, respectively). You see this with NINA (No Irish Need
Apply) signs around the turn of the century, and with old WW2 movies where the
Italian and Spanish characters are often still called "wops" or "spics". There
are some fascinating books and articles on this, eg.
[https://books.google.ie/books/about/How_the_Irish_became_whi...](https://books.google.ie/books/about/How_the_Irish_became_white.html?id=w7ztAAAAMAAJ&redir_esc=y)
The history of immigration and of "whiteness" in the U.S. is a fascinating
study in cognitive biases. If you look at groups that we consider fully
American now (eg. Irish-Americans) vs. how we considered them when they first
immigrated, it's night-and-day (except for African-American people, who were
shat upon when they were first brought over and are still shat upon now). It's
clear, historically, that we were mistaken in the past, and yet _people still
make the same mistake_ , probably because it is evolutionarily useful to
consider yourself superior to other people and socially useful to do so in
groups. Indeed, even in the most PC, liberal, progressive, colorblind,
diversity-affirming circles, the same dynamic still plays out, except that the
"other" in those cases is rural dwellers, or people who didn't graduate from
college, or folks who live in the South.
|
{
"pile_set_name": "HackerNews"
}
|
Pro-China Astroturfers - ulysses
http://arstechnica.com/tech-policy/news/2010/03/280000-pro-china-astroturfers-are-running-amok-online.ars
======
jamesbressi
This shouldn't be a surprise to anyone, no, not the fact that it is China, but
the fact that it is happening. It's just a digital form of propaganda: rogue,
socially influential propaganda.
But, I did find this part entertaining in the beginning "imagine how much
worse it would be if the US government employed a couple hundred thousand
people to "shape the debate" among online political forums. Crazy, right? What
government would ever attempt it?"
Arguably, the US does. Maybe not the US "government", but the employees of it
(politicians, the President's cabinet, etc.), those looking to become
employees of it, lobbyists and so on...
An oversight of the last decade? Examples: the President's Cabinet - "Weapons of
Mass Destruction"; the Healthcare Reform Bill - both sides of the debate guilty;
the media - let's not even go there. And what was that fake grassroots
incident on Facebook that was exposed last year or earlier this year?
Well, you can go ahead and add a million other examples.
~~~
megaduck
As somebody who has lived in both countries, I don't think this argument holds
any water. While the U.S. government does indeed engage in propaganda, it's
like comparing a candle to a forest fire.
The Chinese government controls every news organization in China. Every single
one. All television is state-run, and shows hours of pure propaganda every
day, as well as entertainment programs that push certain agendas. Every book
published requires government approval, and many are written for the sole
purpose of propaganda. Opposition stances are strictly prohibited, across the
board. Virtually every street on every town has large red banners and posters
pushing the 'message of the day'.
This digital form of propaganda is no different. Yes, the United States
engages in abuses. However, they're not even close in scale, and therefore
qualitatively quite different.
Oh, and if anyone wants to see what this digital astroturfing looks like in
the wild, I had some of it show up on my personal blog a while back:
<http://www.varvel.net/david/?p=9#comment-19>
~~~
grandalf
All this means is that the US has managed to achieve a propaganda state
without the explicit threat of force...
Do you recall when GWB took office and long-time members of the press corps
were kicked out if they asked tough questions?
This happens all the time which is why you don't find certain stories/angles
covered in the mainstream press.
Consider that Tim Russert was considered the most hard nosed journalist
because he had the gumption to ask a tough question or two... usually all he
did was quote a few things the person had said in the past and ask them to
explain their current claims (which usually turned their faces quite red). If
this is hard nosed journalism we're all in deep trouble.
Most news today is full of stories that make Americans feel morally
superior... stories about how women are mistreated in all sorts of other
countries... how politicians are corrupt, and how the economic opportunity
stinks (elsewhere).
There is also tremendous deference paid to titles and institutions in the US
that should not be -- and would not be paid to titles and institutions of
another country.
Why, for example, should we take the term "Chairman of the Federal Reserve"
seriously, but take some other country's "Minister of Economic Affairs" less
seriously?
All this is part of a media whose purpose is to help Americans feel that they
have the moral high ground so that wars can be waged whenever necessary. The
media doesn't have to overtly support a war -- in fact it is expected to
question it (but by asking the wrong questions).
There are a _lot_ of things you decide not to say/think if you expect to be on
a national TV show, quoted in a national paper, be appointed to a cabinet
post, etc. Why the voluntary censorship? Because all that stuff is such a
downer. Why worry about it when we can just order another burger and get free
healthcare and feel good about our google searches because google is taking a
stand against horrible china. Why would anyone fight these memes when they
make everyone so happy?
In the US the entrenched interests have been so successful, in fact, that we
see all of this as perfectly normal and reasonable.... and, of course, we
amplify differences with another country (like China) out of moral superiority
and righteous indignation... Nothing makes us feel better than feeling sorry
for some poor victim of a propagandizing state, after all.
This isn't a conspiracy (or a conspiracy theory) it's just what you get when
entrenched interests flourish in a stable, prosperous country.
~~~
cturner
Do you recall when GWB took office and long time
members of the press corps were kicked out if they
asked tough questions?
Helen Thomas got the cold shoulder for a while, but GWB started asking her
questions again to try and gain credibility. The scale is incomparable, which
was the point of the parent.
All this is part of a media whose purpose is to
help Americans feel that they have the moral high
ground so that wars can be waged whenever necessary.
If you really meant what you said here, it would indeed be a conspiracy
theory, despite your insistence that it's not.
You would need a lot more evidence than you're supplying to support the sort
of assertions that you're making.
What the US government does isn't even particularly relevant to the article.
~~~
grandalf
My post was not meant as proof, it was just intended to trigger a thought
process in the reader.
We make a lot of arbitrary distinctions about the legitimacy of institutions.
When we see the leader of an Afghan city state traveling around in an SUV with
a bunch of guys with machine guns, we call him a "warlord", yet we offer
utmost deference to the US presidential motorcade.
Look at it this way, society has winners and losers. Winners generally want to
continue being winners, so they end up in control of coercive force (guns,
military, etc.) and they end up with the tools of propaganda at their disposal
(newspapers, puppet officials).
What differentiates one nation's winners from those of another is far less
than we tend to think, since much of our "consent" to the status quo is built
upon our belief in certain doctrines and institutions.
Surely our democracy is worth something, and many of our institutions are
worthy of some deference and respect, but so are China's and Mexico's and
Iran's...
There are a variety of other fallacies which add to our distorted view of the
rest of the world, which I could also go into detail about.
------
nsoonhui
When I read this article, I thought for a moment about the China apologists,
and those who repeatedly jumped to China's defense whenever the CCP was cast
in an unfavorable light on HN. And I laughed.
~~~
garply
I don't think HN is the target demographic for the propaganda - the targeted
demographic, I believe, would be more blue collar (if such a category can even
accurately describe the social segmentation in China). I suspect most of the
astroturfers don't even speak English.
~~~
Retric
Most young well educated people in China read and write at least some English.
~~~
garply
I'm well aware of this. I do not believe the propaganda is aimed at them.
Violent revolution is not going to come from that demographic. What the
government is worried about is the less-educated, general population having
unfiltered access to certain 'unpleasant' pieces of information. The gov't is
concerned about uprisings like what has been going on in Xinjiang. It is not
concerned so much about some Chinese ibanker learning that a bunch of people
died in Tiananmen Square in 89. The ibanker already knows that.
~~~
Retric
Tiananmen Square was a direct result of young well educated people taking a
stand. Educated people are behind a lot of the world's revolutions, and I think
a non-violent revolution is probably more dangerous to the
interests of Chinese leadership than a purely military coup.
Anyway, my point is the lack of a language barrier means a site like HN is as
dangerous to the Chinese government as a better-regulated site within their
borders. While the government has significant military power, you can’t keep a
billion people in check without some control over how they think. So even with
the Great Firewall of China popular world opinion is still important because
it shapes how people inside China think.
PS: Also you can easily come up with a post that is corrosive to the Chinese
government which does not touch any of the current third rail issues in such a
way that automated software could detect what is going on. EX: An academic
discussion of the economic costs of export-oriented monetary policy where
China is not even mentioned.
------
maxklein
Great. Now anybody who takes a non-conformist line in the China debate is
going to be accused of being paid by the Chinese government. Great way to go
about making sure there is only one opinion on this issue.
~~~
DanielBMarkham
I'm with you Max.
Unfortunately the Chinese didn't seem to think that honest opinion differences
would result in the outcomes they wanted.
Now anything pro-China is tainted. And rightly so, unfortunately.
~~~
raganwald
> Now anything pro-China is tainted.
Why? An argument is an argument is an argument. If it holds water, let it
carry the water. Saying that a pro-China argument is tainted just because
there is a possibility that the person making the argument is paid to make the
argument is fallacious, it's a kind of reverse Appeal to Authority.
Of course, there are certain arguments that are tainted. For example, if
someone says "I counted 280,000 signatures on this Kick Google Out of China
petition," that argument is suspect. As is the "My friends and everyone I know
see no value in Free Speech" anecdotal argument.
And I'm sure there are some others that are subject to scrutiny. But if
someone makes a good argument backed up with reasonable premises, it could
come straight from their Premier for all I care.
~~~
celoyd
> An argument is an argument is an argument. If it holds water, let it carry
> the water.
The thing is, until now, most of the arguments I trusted most about China were
anecdotal. Not because I distrust numbers, but because numbers about China are
so untrustworthy. If someone reputable in a forum like this one plausibly
presents themselves as an old China hand and explains conclusions they’ve come
to from a wide range of experience there — about what average citizens think
of the web censorship, etc. — I’ve tended to assume that it’s as close to the
truth as I’m likely to get.
So sure, this only taints anecdotal evidence about China. But for a country
where nearly all non-anecdotal evidence about important things is already
tainted by the government, that’s a big blow.
(Some acquaintances have traveled in China and explained things about life
there that really raised my opinion. Now I’m a little scared to bring them up
— despite being generally pretty darn wary of the Chinese government — for
fear of looking like an astroturfer.)
------
chrischen
Grass mud horse is a homonym for "fuck your mom", I believe.
Anyways I don't think swearing in Chinese is that big of a deal. _Everyone_ I
knew as a kid said it and I said it to my parents all the time... Or maybe we
were obscene.
------
mahmud
They are known as the 50-cent party:
<http://en.wikipedia.org/wiki/50_Cent_Party>
------
DanielBMarkham
_China, which allegedly employs 280,000 people to troll the Internet and make
the government look good._
_...Many more people do similar work as volunteers—recruited from among the
ranks of retired officials as well as college students in the Communist Youth
League who aspire to become Party members..._
That's truly a staggering number.
Key up "but everybody does it" posts...
~~~
jongraehl
queue up?
------
c1sc0
The more important question for hackers would be: are there ways to
automatically detect astro-talk, much like spam-mail?
|
{
"pile_set_name": "HackerNews"
}
|
Roger McNamee: Microsoft and Google are dead, HTML5 is the future - dstein
http://fora.tv/2011/06/28/Elevation_Partners_Director_and_Co-Founder_Roger_McNamee
======
MaysonL
A great interview with him: [http://tech.fortune.cnn.com/2011/04/14/roger-
mcnamee-loves-t...](http://tech.fortune.cnn.com/2011/04/14/roger-mcnamee-
loves-the-ipad/)
|
{
"pile_set_name": "HackerNews"
}
|
Detecting Spam on Twitter - ajju
http://aarjav.org/wordpress/?p=59
======
ajju
@semanticvoid pointed out that one of the problems with using @spam to collect
spam for research is that some classes of spammers may be underreported. What
is there was an account that just copy tweets from famous accounts or a feed
and inserted malware links once a day?
Any ideas on how to improve my data/experiment?
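One cheap heuristic for exactly that copy-and-inject class would be to strip URLs from a suspect tweet and compare the remainder against recent tweets from the copied accounts, flagging near-duplicates that add a link. A rough sketch, with made-up function names and an arbitrary 0.9 similarity threshold; real input would come from the Twitter API:

    # Sketch of a copy-with-injected-link detector (illustrative only).
    import difflib
    import re

    URL_RE = re.compile(r"https?://\S+")

    def strip_urls(text):
        return URL_RE.sub("", text).strip()

    def looks_like_copy_with_link(candidate, source_tweets, threshold=0.9):
        """True if `candidate` adds a URL to text nearly identical to a source tweet."""
        if not URL_RE.search(candidate):
            return False  # no injected link, so not this spam class
        stripped = strip_urls(candidate)
        return any(
            difflib.SequenceMatcher(None, stripped, strip_urls(t)).ratio() >= threshold
            for t in source_tweets
        )

Accounts that trip this check repeatedly could then be added to the dataset even if nobody ever reports them to @spam.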
|
{
"pile_set_name": "HackerNews"
}
|
Show HN: All my GitHub Issues on one page - effhaa
http://my-issu.es/
======
effhaa
A small project from last weekend. Maybe it is useful for somebody else. :)
|
{
"pile_set_name": "HackerNews"
}
|
Pedantic Hacker News - tolitius
http://pedantichn.tumblr.com/
======
enthdegree
I can usually grasp the general idea of content on pages with invalid markup
but I just couldn't understand this one at all. `Stray doctype?' Come on
now...
|
{
"pile_set_name": "HackerNews"
}
|
Lyme Disease Cases Are Exploding - sb057
https://elemental.medium.com/lyme-disease-cases-are-exploding-and-its-only-going-to-get-worse-5d3c3a2de5c5
======
robolange
My father nearly died of [what we believe to be] Lyme disease. A few years ago
he started getting awful bouts of pain all over his body. He couldn't eat, was
losing weight, was losing muscle mass fast. It got to the point where he was
using a walker (he was less than 60 years old at the time) and could barely
lift a 5 pound weight. He went to a bunch of different doctors, had all kinds
of scans run, and only kept getting worse. At one point my mother noticed that
every test any doctor gave for Lyme disease came back "inconclusive". She
pointed out to one of the doctors, who said that it didn't mean anything as
apparently the test is not very reliable. Finally, they found a doctor who
prescribed medication to treat Lyme disease without a definitive test because
apparently there was a low risk of side effects. Within a couple of weeks of
starting the medication, my father was visibly better, within a few months he
was basically back to normal, and is basically fully recovered now. (Sorry
folks, I don't know any of the specific drugs or medical terminology.)
We have no idea how or when he got infected. As far as he knows, he never had
a bullseye rash, although apparently that doesn't always happen. Now I slather
myself in DEET whenever I go into an even semi-woodsy environment.
~~~
Assossa
I would greatly appreciate it if you could find the name of that medication.
My sister has Lyme Disease and is still suffering health effects from it
despite various treatments.
~~~
spacebatsghost
Most non-Lyme-Literate Medical doctors prescribe Doxycycline. If you are seeing
a Lyme Literate Medical doctor they'll put you on a cocktail (often
Clarithromycin, Doxy, Alinia (an anti-parasitic), and a few others). This also
depends on whether you have co-infections or not.
~~~
twic
The UK NICE guidelines suggest doxycycline, and have a section discussing the
evidence for that:
[https://www.nice.org.uk/guidance/ng95](https://www.nice.org.uk/guidance/ng95)
"Lyme literate" seems to be a pseudoscience term related to the ME-like
crypto-syndrome "chronic Lyme disease" \- for example, see towards the end
here:
[https://sciencebasedmedicine.org/legislative-
alchemy-2014-so...](https://sciencebasedmedicine.org/legislative-
alchemy-2014-so-far/)
------
erentz
Unfortunately there’s no way to know that one _currently has_ Lyme disease
unless the symptoms are close in time to a tick bite.
Treatment is with antibiotics, usually doctors will use a combination of two
types of antibiotics (eg doxycycline and cefdinir) for which there is some
research suggesting better results. In rare cases IV antibiotics may be
required, especially if evidence of cardiac or neurological involvement.
Duration of treatment is for 1 to 4 months. There’s next to no evidence you
should go longer than that.
After treatment patients may also need immune modulation treatment due to
autoimmune reaction to the infection. High dose ivig in serious cases. Some
patients also find benefit using low dose naltrexone for this though it can
have a very nasty side effect of causing depression so watch yourself using
it.
Almost everything else is a scam. And unfortunately with Lyme, while the
patients are clearly very sick, there are an awful lot of quacks ready to
part them from their cash, so be wary.
It’s important to know there are other illnesses that people need to rule out
too, check for dysautonomia, POTS, ME/CFS, MS, and aaSFPN for example.
~~~
hnzix
Any advice on POTS treatment avenues? There seems to be poor GP knowledge
around the condition, apart from giving fluids via a port to reduce symptoms.
~~~
erentz
I’m mostly familiar with POTS in patients with other chronic symptoms such as
fatigue. I’ve been on high dose IVIG and it’s been helping my POTS a lot.
Take a look at a video on YouTube by a Harvard doctor Anna Louise Oaklander
called Small Fibers Big Problem about the association they are making with
something they label aaSFPN. Also take a look at some results for adrenergic
and muscinaric auto antibodies, which are showing up in patients with POTS and
other dysautonomias.
~~~
hnzix
Thank you very much for providing this information, those are some great leads
I'll follow up.
------
stupidboy
Surprised to see Lyme disease on HN, since bringing affordable DNA testing for
B. burgdorferi & other pathogens to the public is the startup I've been
working on since 2014.
Relevant Plug: [https://www.tickcheck.com/](https://www.tickcheck.com/)
As mentioned in other comments, serological tests fall short in various ways
(accuracy, time). If you keep the tick that bit you, we can test it for the
presence of Lyme, and several other pathogens. If negative, we can effectively
rule out much of the risk. Super quick & accurate, too.
~~~
dheera
Interesting. I got bit by 2 blacklegged ticks in the past 3 months but I
couldn't find any free tick tests, so I never had them tested. Both were
within a few hours so I assumed based on CDC advice that I was still in the
green zone. It sounds like one of those things that health insurance companies
would be out of their minds to not cover.
Separately: Bay Area hikers beware -- change your clothes immediately after
getting home, do complete body checks after hiking -- ideally, shower
immediately.
~~~
Alex3917
> Both were within a few hours so I assumed based on CDC advice that I was
> still in the green zone.
I wouldn't trust the CDC data. Notice that there is no primary source on their
webpage.
Here is a paper by someone who took an independent look into it:
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4278789/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4278789/)
------
nouveaux
I've recently gotten into backpacking and learned that the most effective way
of dealing with mosquitoes and ticks in nature is Permethrin.
[https://en.wikipedia.org/wiki/Permethrin](https://en.wikipedia.org/wiki/Permethrin).
It is intended for clothing and not the skin. The best way to apply it is to
soak your clothes and let it dry. Spray on applicators are also available.
Once it is dry, it is very stable and safe. It does breakdown with UV and
washes, so it will need more applications. It is also available for sale and
the pre-treated clothing does last through more washes.
Note: Permethrin is harmful to cats when it is wet, so if you have cats at
home, please read up more on it. Once it is dry, it is safe around cats.
~~~
pmoriarty
Ticks can drop out of trees on your head, or attach to your hair when your
head brushes past some leaves on a tree. Also, if your bare skin is not
covered in insect repellent, that could also be a route that they get to the
rest of your body.
The most effective method of not getting bitten is not to go in to the woods
at all.
~~~
perfmode
> The most effective method of not getting bitten is not to go in to the woods
> at all.
your profile reads:
He who lives without folly is not as wise as he thinks. \-- Rouchefoucald
~~~
abledon
A funny saying I've heard is that "The difference between a smart person and a
stupid one is the smart person knows he is full of stupidity".
------
llamataboot
I may or may not have had chronic lyme disease. I definitely had many many
ticks. (Never cared about em, just pulled them off). And I definitely was sick
for over a decade. And I definitely tested positive on the Western Blot
multiple times. (And chronic lyme is definitely a bit of a catch-all diagnosis
for a host of auto-immune stuff we don't quite understand yet)
It was awful, something I wouldn't wish on anyone.
If you do get a deer tick (they are the small ones, not the giant gross ones)
save it, there are a number of places you can send it for peace of mind that
it was not infected.
Also consider brief antibiotic prophylaxis: 1-3 doses of doxycycline are often
recommended. I'm not in favor of anti-biotic overuse, but if lyme can turn
into chronic lyme and that was indeed what I dealt with for 10 years of pain
and crushing fatigue, then you want to avoid it.
(edit: typos)
~~~
cogman10
Just a note, Chronic lyme disease isn't really a thing.
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4477530/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4477530/)
This isn't to say you haven't been dealing with a lot of garbage, just that
you might need to seek different medical care if your doctors are telling you
that you have chronic lyme disease.
~~~
llamataboot
I appreciate your concern. I certainly had something. It certainly sucked. I
certainly shared a lot of symptoms and remission factors with other people who
identified as having chronic lyme. I certainly had Lyme at some point in my
past.
I agree that "alternative medicine" can be a hotbed of pseudo-science and
sketchy treatment, but I also think that in terms of a lot of these chronic
auto-immune related conditions, our evidence base is still pretty small and
there's a lot we don't know. And the dominant Western medical system tends to
do much better with acute conditions with a clear etiology of cause/effect
than nebulous clusters of symptoms (see also, all of mental health).
My hunch is that chronic lyme, just like for example chronic fatigue that
lasts for years after Epstein-Barr, is some sort of auto-immune condition
triggered by the initial infection, even if the initial infection is gone.
With Lyme it is a little more complicated because there are spirochetes
involved.
I assure you I've probably read everything that the NIH has put out about Lyme
all the way to some of the wackiest all caps blinking sketchy sell-me-vitamin
treatment websites out there, as well as chronic fatigue (I spent the first 7
of those years considering it CFIDS). It nearly destroyed all of my 20s.
Treat prophylatically or not, but try not to get Lyme, that's all I'm saying.
~~~
rayalez
Did you get better? What treatments did you use, what did you do?
~~~
llamataboot
I got better. I also struggled with severe depressive tendencies, panic
attacks, social anxiety, and obsessive-compulsive tendencies. I got better
from those as well.
All these things were linked, yet separate. You cannot tell me the chronic
fatigue was just my depression, as I can tell a difference, but neither was my
depression just from not being able to be active for more than about 2-3 hours
each day. Mental health issues pre-dated chronic fatigue, but chronic fatigue
intensified them.
I did many treatments, though I chose to never have a PICC line installed for
months of IV antibiotics - that seemed like chemotherapy to me then and now,
and not without its own risks. Everything from antibiotic regimens to
homeopathy to humming with crystals in my hands. Sometimes I was raw vegan,
other times I smoked heavily, drank heavily, because nothing worked anyway,
why not have fun, etc.
Too long for one comment here, but find me privately if you are struggling
with chronic immune stuff.
I'm unsure what eventually worked, and what was time. A lot of it, cliche as
it is, was the very strong love of a very supportive partner that saw I could
still live, even if I thought I couldn't.
I would say the following things all had major effects, though I don't use any
currently - many of them provide symptomatic relief for some things, I'm not
sure why things shifted underneath it all.
#1) regular injections of methyl-B12 even with normal cyano-b12 levels
#2) low dose Abilify and modafinil and (sparingly) stimulants
#4) regular yoga practice that provided low-impact exercise, mindfulness
amidst the fear that my life was over, and a way back into an awareness of my
body that didn't only have me think it was the enemy that was killing me, and
of a self that transcended whatever I thought I was
#5) treatment for orthostatic hypotension including low dose steroids, salt
pills, compression stockings, etc
#6) psychotherapy
#7) A few times rounds of abx when I was at my worst seemed to help a lot
#8) supplemental testosterone for 2 years when mine was low-normal (is normal-
normal now without supplements)
#9) eliminating all processed anything from my diet for about 4 years, all
gluten for about 5, all refined sugars for about 5 (can eat anything now
without ill-effect) and drinking home made bone broth regularly
(I also know some people will read this and hone in on a few things and be
like oh! he just needed some psych drugs, therapy, and exercise! Those can def
all be helpful things, but I assure you it was a strange and complicated
journey through the mindbody, I still have no sample size other than me, and
the methyl-B12 was by far the most helpful even though that largely falls
under the pseudo-science perpetuated by people searching for autism cures...)
------
cmrdporcupine
Explosion of deer and rodent populations is definitely not helping. We were
walking with our dog at a local natural area we frequent. A deer crossed the
path and we all stopped to take photos, and then moved on. That day we found
ticks all over our dog. The deer are literally swimming in them.
Both rodents and deer are part of the lifecycle of the tick. Suppressing deer
populations and encouraging red foxes and other rodent predators would have to
help.
~~~
rb808
Most people still see deer as cute, but its starting to change. When I see
deer I just see disease carrying vermin. Deer populations have exploded. I
wish they were systematically culled. Time talked about this a while back, I
havent seen anything since. [https://time.com/709/americas-pest-problem-its-
time-to-cull-...](https://time.com/709/americas-pest-problem-its-time-to-cull-
the-herd/)
[https://vet.uga.edu/population_health_files/scwds-150wtdeer5...](https://vet.uga.edu/population_health_files/scwds-150wtdeer507080-2012.jpg)
~~~
gerbilly
I know what you mean.
However we might not be awash in deer if we hadn't first 'culled' all the
predators. Wolves for example.¹
1:
[http://www.uky.edu/OtherOrgs/AppalFor/Readings/leopold.pdf](http://www.uky.edu/OtherOrgs/AppalFor/Readings/leopold.pdf)
~~~
Tharkun
The wolf population is slowly increasing in western europe.
Here's some wolf news for those who enjoy that sort of thing, in Dutch:
[http://www.welkomwolf.be/node/238](http://www.welkomwolf.be/node/238)
~~~
rb808
I'd love wolves roaming around my local forest, but my neighbors with pets and
local farmers wouldn't let it happen.
------
lkrubner
I was bit by a tick and I got sick. No doctor could figure out what was wrong.
I tested negative for Lyme. I took antibiotics (Biaxin) and 3 months later I
was fine. I stopped taking antibiotics. Within 2 months I was sick again. I
took Biaxin for 6 months. I felt great. I stopped antibiotics. Within 2 months
I was sick again. I took Biaxin for a year. I felt great. I stopped taking
antibiotics. Within 2 months I was sick again.
This time, the symptoms were completely different. I assumed it was a new
illness. Doctors were mystified. One said it was psychosomatic. I was off
antibiotics for more than 6 months. I got sicker and sicker. I was in bad
shape when I decided this illness was the same as the previous illness. I took
a combination of Cipro and Zithromax for 1 year. I got better. Then I stopped.
I got sick again.
I was eager to get back to my career so I tried a few antibiotics that might
be light, cheap, easy and sustainable. A friend of mine, as a teenager, had
acne and the doctors had them on antibiotics for 4 years to deal with the
acne. They’d taken something similar to doxycycline. I tried it but it did
nothing for me. I tried hyperbaric oxygen therapy, which gave me great energy
but did not cure me. Then I tried Amoxicillin. That worked great. I took that
for 11 years. Then I quit and got sick again. I went back to Biaxin, and took
that for another 2 years. My business was going well so I could not focus on
my health. I was busy. The simplest thing was to take antibiotics, which kept
me healthy so I could focus on work. So long as I took antibiotics the illness
kept its distance. In that sense, the illness was similar to leprosy — I could
live a normal life but only if I took antibiotics every day.
But I was irritated with my business partner (I’ve written about that
elsewhere). So I sold my share of the business.
Now I had time to focus on my health. What had I not yet tried? What about
fasting?
I went 2 weeks without food. I took antibiotics the first week but not the
second. I felt sick. The fast ended. I had a vegetarian meal. I fell asleep. I
woke up the next day and had a large vegetarian meal. That night I felt funny.
The feeling was similar to that moment, if you have a flu, when the fever
breaks. With a flu, you get sicker and sicker till a moment the fever breaks
and then you know that your immune system has kicked in. That was exactly the
feeling that I had then.
I have not taken any antibiotics since that fast, and I’ve been blessed with
excellent health.
~~~
Galaxity
Now I'm no medical expert but antibiotics resolve an infection after a certain
amount of time. Even for Lyme and leprosy. There may be health after-effects
but those would not be resolved by antibiotics since the infection is gone.
What could possibly outlast 20 years of continuous antibiotics that isn't
either completely killed or mutates and renders the antibiotics ineffective?
And then be resolved with a fast?
~~~
lkrubner
Teenagers with acne often take antibiotics for many years. I had one friend
with severe acne who took antibiotics well into adulthood.
Those teenagers, who take antibiotics, with a microscope at home can do this
simple experiment: prick your finger with a needle and put a drop of blood on
sterilized glass, then put that under a microscope. Do you see bacteria? Yes, of
course you do. Even if you take antibiotics for 20 years, your body will still
be seething with bacteria. There is no way to get rid of the bacteria on your
body. That isn’t what antibiotics do. Antibiotics work with your immune system
to bring bacterial load back to a reasonable level. If antibiotics killed all
bacteria then people with compromised immune systems could be kept safe from
bacteria with antibiotics. But there isn’t a doctor in the world who thinks
that’s possible.
~~~
mrfusion
I’m under the impression that blood is very sterile. That’s the point of your
immune system.
~~~
lkrubner
" _that blood is very sterile_ "
Please, if you can find a microscope, go look. Use your eyes. Your blood
system is absolutely not sterile, that is a wildly non-scientific thing to
say. Your immune system can not possibly go after ever protein it meets,
otherwise you would be allergic to all food, and you would die. When you see
an article such as "The dormant blood microbiome in chronic, inflammatory
diseases" ask yourself, what is a dormant blood microbiome?
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4487407/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4487407/)
------
chris72205
Even if there's only one mention of it in the article, I'm glad they included
it: alpha-gal syndrome. Not a ton of people seem to know about it and yet I
continue to meet more and more people that are affected by it.
For those unaware, it causes a person to develop an allergy to red meat and
there's currently no treatment for it other than to wait. Granted, it's not as
serious as Lyme disease, but I hope it becomes well known so there's an increased
chance of finding a cure for it.
~~~
_sword
My friend's brother was bitten by a lone star tick on Long Island and since
has developed that syndrome. It's been pretty funny trying to figure out what
we can cook for him at our BBQ's since apparently the meats he's able to eat
now are essentially chicken, fish, duck, and human.
------
thorwasdfasdf
I read about this issue in a Runner's World article. Most people assume that
the ticks jump from trees down onto people. But that's not how they said it
goes.
They said that the ticks which spread Lyme disease get onto runners from
tall grass that people run through: as you make contact with the grass, they get
onto you.
~~~
w8rbt
I had a friend who was in the Marines and he told me once that they wore panty
hose when training in areas with ticks. He said the only downside was that
they can be hot in the summer time.
~~~
utexaspunk
...but is that the _only_ downside?
------
wazoox
I've got ticks a few times a year just running in the woods (the last one was
Saturday). During holidays in the Pyrenees with my family, we had to scan our
bodies for ticks every day, and every day we found several -- up to a dozen --
on each of us.
Now the article is very light on the main question: why did ticks multiply
that much recently?
~~~
HarryHirsch
_why did ticks multiply that much recently?_
It's because of warmer winters and an exploding deer population. People need
to take up hunting again to reduce the numbers of those tick motherships.
~~~
ethagnawl
> People need to take up hunting again to reduce the numbers of those tick
> motherships.
... or, as others have said elsewhere in the thread, restore a balance to the
ecosystem by reintroducing and protecting natural predators (wolves, foxes,
etc.).
~~~
HarryHirsch
You'd think that putting wolves and bears into suburbia would attract even
more resistance than bowhunting from tree stands. Something must be done about
the deer overpopulation, they raid peoples' gardens, they prevent forest
regeneration, they are a pest. Maybe we could start by withdrawing from the
suburbs.
------
kaycebasques
I remember standing in line at a coffee shop, after walking some dogs on a
nature trail. I felt a sharp pain on my hip flexor and knew right away it was
a tick bite, even though I had never had one before. I rushed into the
bathroom and lo and behold, it was a tick. In a panic I asked all the women in
the coffee shop if they had tweezers. They must have thought I had a screw
loose. None of them did so I rushed home to pull it off. I kid you not, as I
pulled it, I heard its jaws (or whatever) snap. P.S. you don’t need to burn
them unless they’ve already burrowed.
~~~
bryanlarsen
I had dozens of ticks as a kid. Lyme disease hadn't spread to Canada yet, so
it wasn't a big deal. I never once felt them bite; they produce an anesthetic
so you don't feel it. Sometimes you could feel them crawling on you, but most
of the time you didn't even feel that.
And you didn't hear the jaws snap when you pulled it off, you heard the mouth
parts tearing. You have to be careful pulling a tick off or the mouth parts
will stay in your skin, causing irritation and inflammation.
~~~
nicolaslem
You should avoid pulling, instead hold it with a tool and turn it. After about
two or three turns it usually comes right off.
~~~
nate_meurer
I've heard of such a tool, but never seen one. Otherwise, twisting a tick
using tweezers or your fingers is a great way to break its head off under your
skin.
~~~
_Microft
There are "tick cards", sized like a credit card with narrowing cuts which you
move along the tick until it comes off. Also very practical.
[https://www.amazon.de/s?k=tick+card](https://www.amazon.de/s?k=tick+card)
------
toss1
Might be time for a CRISPR Gene Drive to eradicate ticks in some zones.
Plausibly more critical than mosquitoes.
Obviously we need to be very careful about trying to rebalance what we've put
out of balance, and study to be sure that we are not eliminating a critical
food source for other links in the food web.
[https://psmag.com/magazine/deleting-a-species-genetically-
en...](https://psmag.com/magazine/deleting-a-species-genetically-engineering-
an-extinction)
------
4restm
The advice we were given in my veterinary entomology course was to tweeze as
close to the base as you can and pull perpendicularly to the skin.
You shouldn't yank it; use more along the lines of the force you'd use to open a
zipper.
~~~
yread
I've had 100+ ticks and always removed them using a wet cloth with a tiny bit of
(solid) soap, turning counter-clockwise. Somehow clockwise never worked for me;
counter-clockwise it was out after the 2nd rotation. We had both lyme and
tick-borne encephalitis in the family, not too bad if caught early.
------
TimTheTinker
Shout-out to Nicholas Zachas (@slicknet,
[https://humanwhocodes.com](https://humanwhocodes.com)). Thanks for the great
JavaScript books, and praying you feel better!
~~~
hinkley
CTRL-F Nicholas
Yeah, he's mentioned once or twice that his is so bad that he can't even write
reliably, let alone do speaking gigs or consulting work. Stuff like this
scares the shit out of me.
------
Chazprime
This isn't a joke. In New England where I live, in the early 2000's, car
accidents involving moose had become so common that Vermont attempted to halve
the moose population by increasing hunting permits.
Today, the population in New England has been shockingly decimated by ticks...
researchers are pulling upwards of ten thousand ticks off of dying moose.
They've been bled dry, thanks to warmer winters and reduced tick dormancy
periods.
~~~
abledon
Wow, "first dominated by dinosaurs, then humans, the third age of the earth
arrived in 2050, dominated by ticks"
------
khawkins
Bullshit, they're not exploding. Lyme disease confirmed cases are occurring at
roughly the same rate in 2017 as 2007. Look at the CDC data yourself:
[https://www.cdc.gov/lyme/stats/graphs.html](https://www.cdc.gov/lyme/stats/graphs.html)
~~~
switch007
You can call "bullshit" by picking a random year, sure.
I can pick 2010 (22,561) vs 2017 (29,513)
Their data set begins at 1997 and ends in 2017 though:
1997: 12,801
2017: 29,513
~~~
khawkins
Exploding doesn't just mean increasing, it means accelerating. No one would
look at that graph and say the rate is increasing. No doubt they excluded it
from their article, and instead chose to cherry-pick years like you're saying,
because it discredits their title.
------
biohax2015
Lyme disease is awful, and ticks are truly evil creatures. The intellectual
disabilities it causes are especially scary to me as a knowledge worker. It's
about time we start treating this as the public health crisis that it is and
targeting concerted efforts towards eradicating ticks.
------
baxtr
When we go hiking, we won’t leave the house without a tick card or something
similar, e.g tick squeezers. When we get back we scan everyone from head to
toe for ticks. At least Lyme disease can be prevented quite successfully if
the tick is removed within the first 24 hours.
[https://www.cdc.gov/features/lymedisease/index.html](https://www.cdc.gov/features/lymedisease/index.html)
------
brainflake
It's too bad LYMErix was taken off the market due to anti-vaccine marketing in
the 90s...
[https://www.vox.com/science-and-
health/2018/5/7/17314716/lym...](https://www.vox.com/science-and-
health/2018/5/7/17314716/lyme-disease-vaccine-history-effectiveness)
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2870557/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2870557/)
~~~
castratikron
Surprised this isn't well known. I found out on my own a few months ago and
nobody I've talked to about it since knew that this was available.
I would definitely pay money for this if they brought it back. Probably up to
hundreds of dollars.
------
emmp
I grew up in a Lyme hotspot. Every early outdoor memory of mine involved
checking for ticks afterwards.
My younger brother had the bullseye as a very young child. My mother had a
different very debilitating tick borne illness, Babesiosis, but was only
treated for Lyme for months.
I had a blood test when I was 20, and was told I was positive for Lyme. I had
zero symptoms of Lyme, and had been spending most of my time living out of the
region at college, though I had complained of some general fatigue. They put
me on doxycycline, and I dutifully took it. I still don't know whether I
actually had Lyme. My father, a chronic hypochondriac, was put on a Lyme
regimen at one point too, though it was never clear exactly what symptoms he
had.
I'm not entirely sure the point I'm getting at with these anecdotes. Doctors
in Lyme hotspots may diagnose Lyme too readily. More specifically, they may
order too many blood tests, which from my naive research do not look to be
especially accurate for Lyme.
------
Willson50
[https://outline.com/CXBe5a](https://outline.com/CXBe5a)
------
joyjoyjoy
Don't know about US products. But in Europe, this is one of the few ones
guaranteed to work:
[https://www.amazon.co.uk/Anti-Brumm-
Forte-150-ml/dp/B006ZL4H...](https://www.amazon.co.uk/Anti-Brumm-
Forte-150-ml/dp/B006ZL4HNY)
~~~
curtis3389
OFF! Deep Woods
[https://www.amazon.com/OFF-Deep-Woods-Insect-
Repellent/dp/B0...](https://www.amazon.com/OFF-Deep-Woods-Insect-
Repellent/dp/B019ZTXU2G)
------
L_226
One of my neighbours when I was growing up in Australia apparently had chronic
Lyme disease; however, every doctor she went to told her that there was
indisputably "no Lyme disease in Australia" [0]. This went on for years.
Eventually she had to go on a trip to IIRC the UK to get prescribed the
relevant medication, which she smuggled back to Australia.
[0] -
[https://www.health.gov.au/internet/main/publishing.nsf/Conte...](https://www.health.gov.au/internet/main/publishing.nsf/Content/ohp-
lyme-disease.htm)
------
beenBoutIT
Foley's wrong about California's tick problems getting worse as the climate
gets hotter and drier. California's western fence lizard has a protein in its
blood that kills the bacterium responsible for Lyme and shares the exact same
climate requirements as the ticks. California doesn't get the tick unless the
lizard is there and the combination keeps Lyme disease in check.
[https://www.latimes.com/opinion/la-xpm-2013-aug-20-la-ol-
lym...](https://www.latimes.com/opinion/la-xpm-2013-aug-20-la-ol-lyme-disease-
lizard-20130820-story.html)
------
switch007
A few years ago I got bitten by something after being close to a deer but
never saw any kind of insect. I had a large inflamed area (one circle IIRC)
for a while, experienced mild flu symptoms for a few weeks, and then some
wrist pains a few weeks after that. Doctor wouldn't give me antibiotics.
I haven't had any physical symptoms since, but I'm pretty sure my mood and
cognitive ability has taken a hit. There are of course so many factors not
controlled for, but it's always at the back of my mind.
------
staunch
I was bitten by little nymph/larval (not sure which) ticks on Bay Trail in
Mountain View, CA. Had to stop riding my bicycle there (which I absolutely
loved) because I just can't afford to be debilitated by Lyme disease.
I've become violently genocidal towards these little bloodsuckers. If it was
up to me, I'd wipe them out with almost any means, and at whatever cost ;-)
After looking into the anti-tick options I actually started fantasizing about
some kind of super lightweight "earth suit" (like an astronaut's space suit).
Something that you could put on, fully seal yourself in, and then just enjoy
the outdoors with reckless abandon. I know it sounds a bit crazy but I think
it could potentially be a transformative way of exploring the outdoors.
I love the idea of being rattlesnake/tick/spider/thorn-proof. I could sleep
outdoors in the middle of fields, under trees, in puddles, etc.
~~~
jerkstate
FWIW, lyme disease is exceedingly rare on the west coast..
~~~
staunch
Yes, but there are other infectious diseases spread by ticks. And most
importantly, I really don't want to be one of the unlucky few!
[https://www.cdc.gov/lyme/datasurveillance/index.html](https://www.cdc.gov/lyme/datasurveillance/index.html)
------
coopernewby
This new book, Bitten, describes some of the mystery surrounding the origins
of the disease in the US in the 1960s. [https://www.amazon.com/Bitten-History-
Disease-Biological-Wea...](https://www.amazon.com/Bitten-History-Disease-
Biological-
Weapons/dp/006289627X/ref=sr_1_1?crid=1H1E4KW1SW0I1&keywords=bitten+kris+newby&qid=1561500221&s=gateway&sprefix=bitten%2Caps%2C402&sr=8-1)
------
tolmasky
We actually have a vaccine for Lyme disease that was pulled due to the Antivax
hysteria. I believe an article was posted here on Hacker News before, but here
is a link to one otherwise: [https://www.vox.com/platform/amp/science-and-
health/2018/5/7...](https://www.vox.com/platform/amp/science-and-
health/2018/5/7/17314716/lyme-disease-vaccine-history-effectiveness)
We now live in a world where you can vaccinate your dog for Lyme disease but
not yourself.
~~~
cmrdporcupine
We don't vaccinate dogs for lyme disease. We give them a topical or oral
systemic insecticide that permeates their skin and bloodstream so as to kill
ticks on contact.
Dogs have short, and I guess less valuable, lives. So this is considered an
acceptable practice for prevention. And there are many many dogs with severe
reactions to these medications, including some fatalities.
Would you really want to do that to yourself?
~~~
ceejayoz
[https://www.zoetisus.com/products/dogs/vanguard-
crlyme/index...](https://www.zoetisus.com/products/dogs/vanguard-
crlyme/index.aspx)
> For vaccination of healthy dogs 8 weeks of age or older as an aid in the
> prevention of clinical disease and subclinical arthritis associated with
> Borrelia burgdorferi.
------
pastor_elm
The CDC says post-treatment Lyme disease syndrome or 'chronic lyme' doesn't
exist. While i am suspicious of this claim, I know people who say they have
it, and have subjected themselves to every treatment under the sun without any
success.
~~~
ceejayoz
> The CDC says post-treatment Lyme disease syndrome or 'chronic lyme' doesn't
> exist.
You're conflating two terms. PTLDS exists, in patients who had Lyme but
experience post-treatment symptoms.
"Chronic Lyme" is different, and is largely problematic because the folks who
believe they have it tend to have never actually _been_ exposed to Lyme in the
first place.
[https://www.niaid.nih.gov/diseases-conditions/chronic-
lyme-d...](https://www.niaid.nih.gov/diseases-conditions/chronic-lyme-disease)
------
richcollins
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4920391/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4920391/)
------
ajudson
Are these all real cases or some psychogenic ones? I was under the impression
that a lot of "chronic Lyme" cases were something else.
~~~
msla
There's no such thing as "chronic Lyme disease":
[https://www.fasebj.org/doi/10.1096/fj.10-167247](https://www.fasebj.org/doi/10.1096/fj.10-167247)
[https://sciencebasedmedicine.org/does-everybody-have-
chronic...](https://sciencebasedmedicine.org/does-everybody-have-chronic-lyme-
disease-does-anyone/)
~~~
megous
I didn't read the links, but I've read a lot here
[https://lymescience.org/](https://lymescience.org/)
and I also almost fell for "chronic Lyme" a few years ago. There's something
seriously unsettling around websites/online communities dedicated to chronic
Lyme. From Lyme friendly doctors, to people doing and interpreting their own
tests, to people ascribing all kind of ailments to this "condition", etc.
~~~
ajudson
If you want to go down another rabbit hole, read about Morgellons
~~~
megous
Uh. Interesting one.
------
agumonkey
Interviewee is wrong about movies. They exist and they're high grade Z movies.
------
pfdietz
Well that's certainly an alarming symptom.
------
hanniabu
Maybe it's wrong, but I'm happy that cases are exploding because that
(hopefully) means there will be more research into better treatments and
preventions.
------
jaequery
This scares me.
------
Kenji
Why is nobody eradicating ticks with genetic engineering? Ticks are a species
that deserves to be eradicated. The health implications are vast and they
don't contribute much to the ecosystem. They're a literal plague.
------
newswriter99
Jack MacReady: It's obvious the bastard's got lyme disease!
Bill Pardy: What?
Jack MacReady: Lyme disease. You touch some deer feces, and then you... eat a
sandwich without washin' your hands. You got your lyme disease!
Bill Pardy: And that makes you look like a squid?
------
gregoryexe
This article blames sprawl and climate change, but neglects the connection to
the Plum Island Animal Disease Center right off the coast of Lyme CT. Borrelia
has been in the US for ages, but wasn't known for causing the debilitating
symptoms this modern strain causes.
~~~
ceejayoz
[http://www.cnn.com/2004/SHOWBIZ/books/04/02/lab.257/index.ht...](http://www.cnn.com/2004/SHOWBIZ/books/04/02/lab.257/index.html)
> Moreover, a Department of Agriculture spokesperson, Sandy Miller-Hays, told
> the news service that -- counter to Carroll's claims -- Lyme disease was
> never studied at Plum Island.
As for symptoms, it was described in the 1760s as "exquisite pain [in] the
interior parts of the limbs", neurological symptoms in the 1920s, etc. That it
took a while to recognize the _cause_ of these things doesn't mean the disease
didn't exist.
[https://en.wikipedia.org/wiki/Lyme_disease#History](https://en.wikipedia.org/wiki/Lyme_disease#History)
~~~
ColanR
I think that a government spokesperson should not be considered a reputable
source when speaking on potential misdoings by the same government. Is there
an independent source that can speak to the Plum Island claims?
~~~
ceejayoz
Given that it's impossible to prove a negative, the onus is on conspiracy
theorists to provide evidence that Lyme research was performed there.
~~~
ColanR
15 years ago, conspiracy theorists were being asked to prove government
spying. It's hard, and the coverup might leave only circumstantial evidence,
but it doesn't mean it's not true.
Either way, if you're trying to provide a source against the point being made,
it needs to be better than that one. Is there any other?
Edit: I did a bit of looking into it.
Tick-borne diseases were definitely studied there, which means the general subject
was being researched. (a bunch of pubmed studies are listed here [1], and
pubmed shows in the author information that they were from Plum Island. Ignore
the site if you like, just look at the pubmed articles.)
The geography of the disease also fits a spread from the Plum Island facility.
See the same article for a map.
I obviously can't go order and read a physical book off the cuff, but it seems
like the Lab 257 book by Carroll is decently reliable, as far as can be
expected when making claims counter to an official government position. From
wikipedia: "The review in Army Chemical Review concluded 'Lab 257 would be
cautiously valuable to someone writing a history of Plum Island'".
Anyway, in summary, there is enough evidence to ask a reasonable question:
which puts the onus back on the government, or you, to provide credible
evidence that Lyme was not developed there.
~~~
ceejayoz
> The geography of the disease also fits a spread from the Plum Island
> facility. See the same article for a map.
No, it doesn't, considering it was documented in Scotland in the 1700s. The
Plum Island facility was started in 1954.
> From wikipedia: "The review in Army Chemical Review concluded 'Lab 257 would
> be cautiously valuable to someone writing a history of Plum Island'".
The _actual_ quote is:
> The review in Army Chemical Review concluded "Lab 257 would be cautiously
> valuable to someone writing a history of Plum Island, _but is otherwise an
> example of fringe literature with a portrayal of almost every form of
> novelist style. "_
In other words, "it gets the biographical stuff mostly right, before it goes
nutty".
> Anyway, in summary, there is enough evidence to ask a reasonable question:
> which puts the onus back on the government, or you, to provide credible
> evidence that Lyme was not developed there.
Lyme's historical record predating the very existence of the lab is fairly
conclusive proof that it wasn't developed there.
~~~
gregoryexe
> No, it doesn't, considering it was documented in Scotland in the 1700s. The
> Plum Island facility was started in 1954.
Yes, it does. This strain was previously unseen in the US, so much so that its very
name references where it was first seen in the US, regardless of where it may
have been elsewhere in the world.
|
{
"pile_set_name": "HackerNews"
}
|
This Mediterranean diet study was hugely impactful. The science just fell apart - nsstring96
https://www.vox.com/science-and-health/2018/6/20/17464906/mediterranean-diet-science-health-predimed
======
urlwolf
I'm struggling with the 'what can I eat' question. I'm Mediterranean. I've
tried plenty of fad diets.
One unorthodox way to check the effect of a diet: "Did you feel any better?"
I'm currently vegan. I don't feel better. The only time in my life when the
diet seemed to have an effect I could notice in mood and energy levels was
when I was 'raw'.
~~~
bachbach
What would you say is the explanation for this?
|
{
"pile_set_name": "HackerNews"
}
|
If P vs NP formally independent then NP has very close to poly-time upper bounds - amichail
http://www.cs.technion.ac.il/~shai/ph.ps.gz
======
gjm11
This one may merit a summary, since there are a few subtleties. So, here goes.
Background: Ever since Goedel, we know that in any given system for doing
mathematics some statements are _neither provable nor refutable_. So, if after
much effort mathematicians fail to find either a proof or a disproof for some
conjecture, it's natural to wonder whether perhaps that's because neither
exists.
In some cases, an undecidability result of this kind would be pretty much
equivalent to an actual decision. For instance: Goldbach's conjecture says
"every even number >= 4 is the sum of two prime numbers". If this is
undecidable then, in particular, I can't write down an explicit
counterexample, so it might as well be true.
OK, so what about P=NP? Well, what Ben-David and Halevi have done is to show
_something a bit like_ the following: "If 'P=NP' is not decidable using the
axioms of Peano arithmetic, then any family of decision problems that's in NP
is 'almost in P'". But there are some fiddly details that might matter.
Detail #1: it's not actually "is not decidable", it's "is _provably_ not
decidable". The distinction between these two is important to anyone who's
interested in this stuff in the first place.
Detail #2: it's not even "is provably not decidable", it's "is provably not
decidable, where the undecidability proof is of a particular kind". They claim
that "any known technique for proving independence from sufficiently strong
theories of statements that are neither self-referential nor inherently proof-
theoretic" is of this kind. Since the whole field of independence proofs got
started when Goedel worked out how to make things "self-referential" that on
the face of it look like _completely the wrong kind of thing to sustain self-
reference_ , I can't help but be a bit unimpressed by this.
Detail #3: What they mean by "almost in P" is this: there are arbitrarily long
intervals of arbitrarily large numbers such that any problem whose size lies
in one of these intervals can be solved in almost-polynomial time -- i.e.,
O(n^f(n)) where f grows very, very slowly, in particular more slowly than log
log ... log n with any number of "log"s.
------
wcarss
<http://users.socis.ca/~wcarss/ph.pdf>
a bit more usable (article in pdf format)
~~~
roundsquare
Is it just me, or is this backwards?
------
amichail
Also see: [http://blog.computationalcomplexity.org/2009/09/is-pnp-
ind-o...](http://blog.computationalcomplexity.org/2009/09/is-pnp-ind-of-zfc-
respectable-viewpoint.html)
From comment 2: _If P!=NP is independent of Peano Arithmetic, then "almost"
P=NP._
~~~
gjm11
That's referring to this very paper.
------
Devilboy
Sorry to be off-topic but is this an audio or video link or something? I can't
work out how to open it...
~~~
coryrc
It is gzip-compressed postscript. The following commands could come in handy:
gunzip (or gzip -d)
gv
ps2pdf
~~~
Devilboy
Er so I have to download a utility to gunzip it, and then download another one
to convert it to a PDF? That's pretty annoying.
EDIT: I'm just saying, if you're linking something to HN where hundreds of
people will click on it, isn't it more efficient and courteous to convert and
re-host it in a format that most people can read?
~~~
Rantenki
You want to read a mathematical proof about computability, and you need
somebody else to gunzip it for you first? LOL!
|
{
"pile_set_name": "HackerNews"
}
|
Anyone else thinks that Whiteboard interview is just covered ageism? - zerogvt
It's the second time in a few months I'm being turned down with the pretext of a failed whiteboard interview. Things like improper syntax and not getting the damned recursive solution fast enough.
Given that I am 42 yrs old and been at this line of work for 14 yrs now I think it's safe to assume that I neither have the time nor the appetite to constantly exercise on solving mind puzzles on a whiteboard. I am good at what I do -and I do it at a top level company- but it has nothing to do with coding on a whiteboard. I'm sure that anyone who is a few years _out_ of the university and _into_ a real job finds it both hard and surreal to go through these hoops to land a job. Whiteboarding simply tests for skills that are not needed nor exercised once you're out of uni*.
Thinking about all that, it then dawned on me. Maybe this abomination is just a way to take out older candidates and favor young ones. A form of ageism that is legally safe for the company.
Dunno - what's your thoughts?
*By whiteboarding here I mean testing the form of questions one can find in places like HackerRank and the like. Obviously, drawing a large system design or using a whiteboard as an aid to describe/analyze other aspects of a system is not the topic I'm touching on here.
PS 1: I'm done with that sh1tshow myself. I sincerely hope I'm never that desperate to put myself through that again.
PS 2: For what it's worth, here's a repo with all companies that do not use whiteboarding: https://github.com/poteto/hiring-without-whiteboards
======
rmah
I'm even older than you and have no problem with using a whiteboard during
interviews. That said, interviewers who get hung up about precise syntax are
poor interviewers and need coaching. Or are just looking for an excuse to ding
someone. More to the point, I don't see how asking to put answers to technical
questions on a whiteboard favors younger people over older people.
~~~
ryanisnan
I agree. What would age have to do with recalling syntactic idiosyncrasies, or
being able to theorize about problems with a dry-erase marker and a whiteboard
(or a pen and pencil, or a keyboard and a computer)?
Reading between the lines, you seem upset about having to answer questions
about technical problems you perceive to be irrelevant, and that these
technical problems are more likely to be solved successfully by those who have
recently practiced them (e.g. graduates).
I too agree that, while abstract technical interview questions have little
bearing on most day-to-day work, they do some things quite well:
1 - Define a well-bounded problem of sufficient difficulty
2 - Give the interviewee a good baseline for objective success
3 - Exercise a person's mind to think about complicated problems
While they are contrived and not entirely representative of daily work, they
are not without value.
~~~
asark
I think most of the dislike comes from the tendency of typical whiteboard
questions to favor someone who happens to have seen a very similar problem
recently, the same way someone who comes up with the correct answer to a brain
teaser very quickly is usually someone who's seen it before. Many rely on
that, in fact, since otherwise they'd be asking people to come up with a
publishable (once published, perhaps) finding, under pressure, on the spot, in
maybe an hour.
This might be fine if the "very similar problem" is something you'd likely
have encountered in your work, but often the questions are drawn from a pool
that _most_ people in this line of work see rarely if at all, and if they do
it's likely to be some very small subset of the questions, so dedicated study
of the remainder of the pool still puts one at a large advantage, regardless
how useful it is in doing your actual work.
They're measures of "how bad you want it" (how much of your time you spent
memorizing stuff you don't actually use to prep for the interviews) and/or how
recently you took an algorithms course. And maybe those are things worth
measuring, I dunno. Maybe the absence of strong enough signals on either of
those is important enough that it makes sense to use them to reject people who
are otherwise very capable of doing the actual work.
~~~
username90
> Many rely on that, in fact, since otherwise they'd be asking people to come
> up with a publishable (once published, perhaps) finding, under pressure, on
> the spot, in maybe an hour.
In much of science the hard part is asking the right question rather than
coming up with the solution, so figuring out these algorithms is not
equivalent to making publishable research.
------
ohaideredevs
"It's the second time in a few months I'm being turned down with the pretext
of a failed whiteboard interview. Things like improper syntax and not getting
the damned recursive solution fast enough."
Is it a pretext, or did you actually fail the interview? I want to work for
West-coast-pay company at some point, and it seems that the idea there is for
me to spend 6 months learning stuff I will never use, so I can compete with
the kids who spent 4 years learning mostly stuff they will never use.
That is, if I fail it, it's not because I am older, it's because I don't know
stuff fresh grads know.
IT is full of grinding pretty meaningless stuff (especially at lower levels),
as much as we romanticize it.
~~~
ziddoap
> _That is, if I fail it, it 's not because I am older, it's because I don't
> know stuff fresh grads know._
I believe this is exactly the point that the parent poster is making. You
don't know the stuff fresh grads know _because_ you're older (obviously this
isn't an absolute, but likely). Therefor, structuring the interview around
stuff that only fresh graduates are likely to be up-to-date with would be
discriminating against older people.
If you fail, it _is_ at least related to you being older, if the focus of the
interview was designed around hiring fresh graduates.
~~~
munificent
College dropout here. Young people sometimes don't know this stuff either.
This is probably less ageist than it is education-ist, which is not too far
away from classist.
Personally, I do think it's worth testing candidates on these pure CS skills,
even though I myself didn't have them and had to study before I interviewed at
Google. What I've found since then is:
1\. Surprise, surprise, I actually have used quite a few of these concepts in
my work. My experience may not be typical, but my role really does benefit
from my having a better grounding in algorithms than I did before.
2\. When communicating with other people at the company, it is _very_ helpful
to be able to presume a baseline understanding of algorithms, data structures,
and big-O. A lot of code reviews and design discussions are easier and faster
when you can just say "yeah, but that's O(n^2)" or "BFS would let you early
out more frequently here".
As an interview technique, I also think there is some value in testing an
arbitrary skill a candidate might not have, because it's a good gauge of
hustle and discipline. Yeah, learning algorithms is a chore and a hassle.
But... a lot of shit you have to do at work is a chore and a hassle.
If the interviewer can see that you're able to make yourself do that for the
interview, it's a good sign you'll have the discipline to do some of the
grunge work that is inescapable in the software field.
~~~
srfilipek
I agree with your first points, but...
> Yeah, learning algorithms is a chore and a hassle
> If the interviewer can see that you're able to make yourself do that for the
> interview, it's a good sign you'll have the discipline to do some of the
> grunge work...
This doesn't make any sense, unless very, very specific bounds are put on the
interview questions beforehand...
Without that, what is a candidate to do? Memorize all known data structures
and algorithms?
~~~
munificent
_> Memorize all known data structures and algorithms?_
No, but you should know the classics. That's kind of the "general contract"
for how these big tech companies interview. Most also proactively tell
candidates what material they should expect to be interviewed on, like:
[https://careers.google.com/how-we-hire/interview/#onsite-
int...](https://careers.google.com/how-we-hire/interview/#onsite-interviews)
A good interviewer is not aiming to ask gotcha questions where if you don't
know that one specific weird algorithm for that one specific data structure,
you're entirely hosed. That provides almost no useful signal to the
interviewer.
But they will ask questions where some well known data structure is part of
the solution and then provide guidance as needed based on what you seem to
know.
------
pdpi
If you're writing code on a whiteboard in an interview, syntax, library calls,
etc, should not at all be part of the evaluation process. You dodged a bullet
on that one.
Being able to "get the recursive solution fast enough"? Depends a lot on the
expectations, might or might not be an eliminating criterion. Definitely a
problem e.g. for functional-heavy environments, where that style of reasoning
is expected to be your bread and butter.
In the general case, I've seen fairly senior developers crash and burn on
basic programming interviews (be they on a whiteboard, on a laptop, or
whatever other format) due to genuinely weak programming skills, so I don't
agree with the assumption that some candidates are "above" these interviews.
Also, a lot of seemingly pointless questions are good questions phrased wrong.
E.g. I will never ask you to implement depth-first tree traversal on a
whiteboard, but will ask to pretty print a directory structure, and make a
note of whether a candidate notices this actually _is_ depth-first traversal
dressed up as a practical day-to-day problem.
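A minimal sketch of the kind of answer I'd hope to see -- the Entry type, the names, and the example tree are all placeholders I'm making up here, not the actual exercise:

    from collections import namedtuple

    # Stand-in for a directory entry; a real exercise might use os.scandir or similar.
    Entry = namedtuple("Entry", "name children")

    def pretty_print(entry, depth=0):
        # Visit the current node, then recurse into children: depth-first traversal.
        print("  " * depth + entry.name)
        for child in entry.children:
            pretty_print(child, depth + 1)

    tree = Entry("src", [Entry("main.py", []),
                         Entry("util", [Entry("io.py", [])])])
    pretty_print(tree)

The thing I'm watching for is whether the candidate notices that the indentation level is just the recursion depth.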
Of course, just because the interview format is not fundamentally flawed
doesn't mean that plenty of companies don't mess up implementing it in
practice...
Then again, your PSes suggest you don't want a reasoned discussion so much as
you just needed to get that off your chest, which is fair enough.
------
Kapura
In my experience, whiteboard interviews are absolutely a form of gatekeeping.
Whether the firms know it or not, they are excluding candidates who would have
succeeded in the role by including weird unrelated side content in the
interview. As the OP rightly mentions, nothing you program on a whiteboard has
any application once you are actually in the job. It can be incredibly
frustrating to know that you can do the work of a job, but be denied because
of an old superstitious test.
The only silver lining is that, ultimately, companies hire the candidates they
interview for. If a company makes its hiring decisions based on trivia
quizzing and whiteboards, they'll ultimately produce software that reflects
that. During my last big job search, I eventually started asking upfront if
there was whiteboard coding in the interview. I don't want to waste my time.
------
gfodor
Beyond whiteboard based coding being a bad medium, on-the-spot programming
exercises are a super noisy, poor medium for assessment compared to the
alternative: a work sample in the form of a 4-8 hour programming exercise done
by the person on their own terms. No exploding deadline, no restrictions on
tools other than those necessary to show the person's relevant job skills.
Make a bunch of them, put the time into them, and let the candidate have a
chance to pick from a library of exercises that are interesting and fun to do.
The main downside to a work sample is that a lot of people will say they
simply don't have the time to do it. That's fine, you'll lose some people, but
you'll also have people applying who really want to work for the company. From
there, you'll have a small number of people who do a great job on it. By the
time they are in your office interviewing, the idea that you'd reject them
because of a slip-up on a small coding exercise on a whiteboard is absurd,
because you already have a mountain of evidence regarding their skills as a
programmer. The in-interview exercises will be there to test other skills like
communication and collaboration, not writing good programs.
~~~
munificent
I think there's a good argument that larger-scale work-sample exercises like
this are even _more_ ageist because older candidates are more likely to have
family commitments that give them less free time to allocate for things like
this.
~~~
gfodor
Frankly, I've heard this before and I disagree. (I'm a parent.) It does mean
you have to be selective with regards to what companies you apply to: you
don't have infinite time (and nor does anyone.)
It's a catch-22, however. I look for companies that have good interview
processes, because it means they are going to have good employees. I feel
pretty strongly that good skill assessment is a critical component to hiring
good people. I don't think you can hire programmers based upon resumes,
references, or 'track record' alone. I think you need to assess their skills
in a very deep way during the interview process, since there are a lot of
candidates who look good on paper and can talk the talk (deliberately) but
when actually put to work aren't up to the job.
So, I look highly upon companies that actually provide a good forum for
showcasing my skills: not just because it means my interview will be fair, but
it also means that the people I'm working with will have had their skills
assessed in a similar way. I'm willing to make that tradeoff in time and
prioritization. It's basically this tradeoff: unless you think you can hire
good programmers without an assessment, you either should expect to be assessed
(and should look kindly upon assessment mechanisms you feel do a good job of
assessing your skills, like a work sample), or you should expect to work with
less skilled programmers, on average, if you get the job.
edit: should add, I love your game programming book! :) thanks for writing it.
~~~
munificent
Yes, I agree with you overall. I'm not crazy about the algorithms-on-a-
whiteboard process, even after having been on the other side of the table many
times. I have very little confidence on my ability to judge someone's
suitability as an engineer based on their performance at a whiteboard.
At the same time, I just wanted to point out that deeper interviews have flaws
too. The time commitment is a real challenge, in particular for people that
have kids or are in an economic place where they may be working multiple jobs.
You don't want someone's inability to commit the time to interview to
inadvertently filter them out. Especially because the people who have the
least time are often the ones most in need of a good-paying, stable software
job.
_> edit: should add, I love your game programming book! :) thanks for writing
it._
You're welcome! :)
------
giancarlostoro
I tend to avoid places that whiteboard, the one time I did have to do it I
sucked it up only because a friend stuck their neck out to get me the
interview. I was told syntax didn't have to be perfect. Yeah I think they may
come up with stupid excuses basically to not hire you.
Honestly, interviews go two ways: you figure out if you _really_ want to work
there, and they do as well. If they don't want you, assume it's for the best,
you would have had to deal with worse: coworkers who hate your personality or
have a toxic personality.
~~~
padobson
_you figure out if you really want to work there, and they do as well_
I don't think this can be overstated. If you don't enjoy the interview
process, you probably don't want to work there.
At the same time, interviewing is often the only way you can find out if a
company is really serious about hiring. In my experience, most companies with
organized HR have certain positions that they are "always hiring". Which isn't
really true, they just want to have the light on in case a rock star happens
to be looking for a job.
I've been on a bunch of interviews where it was clear neither the company or
hiring manager was enthusiastic about finding someone to fill a position, but
that HR was pressuring them to keep the light on.
~~~
bquinlan
_I don 't think this can be overstated. If you don't enjoy the interview
process, you probably don't want to work there._
I disagree with this. The interview that I enjoyed the most was with founders
who took me out for beer. We talked a bit about software but most of the
discussion was small talk.
After a few months of working there, I realized that some of the people in the
office didn't really have any qualifications except "liking beer" and it
wasn't a professionally interesting place to work.
My least pleasant interview experience was with Google but I enjoy working
there.
------
xenocratus
I was not prepared when I had my last interview like this, but if I ever go
interviewing again I'll make sure to have my own algorithmic problem at hand
(to which I'd know by heart all the tricks and improvements).
Then at the end of the interview I'll ask the interviewers to solve the
problem (or just give a description of how it would work). Point being - if
they interview you on this stuff but can't do it themselves without knowing
the solutions before, then how could they reasonably claim to be assessing
you? And would you want to work for someone who does this to potential
employees?
It sometimes seems like these interviews are d __k measuring contests between
the two parties.
~~~
crimsonalucard
That's perfect. If you think an interview went badly. Give them an algorithm
problem at the question session that very likely they won't be able to solve.
That's a perfect way to show them their bias.
------
gfodor
It amazes me people still actually do this. Why would you, in 2019, not just
ask a person to bring their laptop with their preferred development
environment set up on it and ask them to solve some basic coding exercises
there instead.
Physically writing code with a marker on a wall seems akin to asking a
mechanic applying for a job to demonstrate their skill at repairing cars by
performing a 'repair' on a miniature car made of lego bricks.
Edit: one thing I noticed once I started having candidates work on their own
laptop was a) some very unqualified people slipped through the screen and this
can be obvious when they have no programming tools on their personal machine
and b) you can learn a lot quickly from seeing someone use their own machine
-- you get a clean signal if they are adept at using their text editor, git,
build tools, etc, without risk of a contrived setup making it a false
negative.
~~~
rolltiide
Square’s interview was like that a few years ago. Even has pair programming
with the interviewer. Most fun I had in an interview!
Still stings when you thought you had a positive experience but they didn't
think so. Only other time I had that experience was with first dates that I
thought went well.
~~~
gfodor
Yep nobody likes getting turned down by a potential employer, but it stings
'less' if you feel like you were treated fairly and given an objective, un-
biased forum to showcase your skills. Nothing is more demoralizing than being
able to replay in your head the point in the interview where you got 'dinged'
for something ridiculous due to a quirk of the process like marker-based-
programming and feeling that sent you on the trajectory for a rejection. In
those scenarios, it can even be a self-fulfilling prophecy, since if you are
intimidated by marker-based-programming it can just lead to little mistakes
due to nerves that ultimately cost you the job.
~~~
rolltiide
> Nothing is more demoralizing than being able to replay in your head the point
> in the interview where you got 'dinged' for something ridiculous due to a
> quirk of the process like marker-based-programming and feeling that sent you
> on the trajectory for a rejection.
Honestly, I'm so desensitized to that that I don't feel that way. I go in
expecting a random brain teaser that I BS my way through.
I actually land most jobs by recycling interview questions and answers from
prior interviews.
"Let me know if you've seen this question before" NOPE
But more companies have also done more core competency related exercises with
IDE's set up or take home exercises. Haven't been paid for a take home
exercise yet, but I'm hearing thats happening a little more too. I think a lot
of startups in the bay area are content that their employees will stay for
around 18 months, so they don't need the brainteaser rationale that they need
to interview an engineer on the idea that they change teams.
------
40acres
I don't have a problem with whiteboarding, I collaborate with teammates all
the time to design rough sketches and even bits of code on the whiteboard. The
only criticism I would have is if the interviewer is overly strict on things
like syntax and API names.
Whiteboarding questions should be related to the fundamentals of computer
science and programming. Trees, hashes and arrays have been and will be around
forever, it's fair game in an interview setting.
~~~
crimsonalucard
Right, but these questions can get mind-bending. Like count the number of ways
to make change for a dollar. That's a basic tree question.
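For reference, the naive recursion behind that one is short (the denominations here are just the usual US coins up to a half dollar, and the function name is mine; memoizing it is the standard follow-up):

    def count_ways(amount, coins=(1, 5, 10, 25, 50)):
        # Count combinations: either skip the largest coin entirely,
        # or use one of it and recurse on the reduced amount.
        if amount == 0:
            return 1
        if amount < 0 or not coins:
            return 0
        return count_ways(amount, coins[:-1]) + count_ways(amount - coins[-1], coins)

    print(count_ways(100))   # 292 ways with these denominations

The "tree" in the question is exactly the call tree of that function.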
------
stcredzero
_Anyone else thinks that Whiteboard interview is just covered ageism?_
It really depends on who is interviewing you. It can be. Even the Google
Hangouts, screen-sharing one can be. I've definitely had that sense from one
interview.
_Given that I am 42 yrs old and been at this line of work for 14 yrs now_
I'm a bit older than you, and have just a bit more experience, so I think I
know where you are coming from.
_Whiteboarding simply tests for skills that are not needed nor exercised once
you 're out of uni._
Again, this greatly depends on who is applying the technique and how it is
being applied. It _can_ be applied to see if the person can actually do the
kind of systems thinking to put a new design together and make it specifiable,
such that someone could go and implement it.
That said, I've also taught courses to high school, college, and professional
students, and I think the skill set for interviewing and the difficulty of
learning good application of the techniques is of the same scale as teaching.
In other words, don't expect to get good outcomes by just giving interviewers
a few seminars, then telling them to go at it. You'll get the same level of
interviewing as the level of teaching you get by doing the same thing to TA's
with no experience.
The biggest single issue I've seen in whiteboard interviews, is the
interviewers being hyper focused on what they want to find, and not listening
to what the candidate is saying.
------
southphillyman
Yesterday I read that Google actually decreases their expectations in the
white boarding exercises for experienced hires because they realize the
leetcoding techniques will not be as fresh in an experienced dev's mind. The
expectations are higher in regards to system design and resume questions. That
seems to be fair imo.
Later in the material it said that they reject over 80% of people who make it
onsite. The bar at FAANGs is just relatively high regardless of what kind of role
you are trying to get. My mentality is preparing for high bar whiteboard
interviews best case lands a job at a FAANG and worst case makes it incredibly
easier to land a job elsewhere at a company that doesn't rely on whiteboarding
or has a lower bar in general. That's to say investing in whiteboard practice
has little downside and has an acceptable time/reward ratio given the results
if you are even just ok at it. In terms of ageism whiteboarding is not
required to filter older devs out. Resume screening can do that. Why would they
waste 6+ hours of their expensive Sr. devs time to put someone through a
pointless exercise? Serious candidates will commit the same 1-3 months of
study whether they are 23 or 43.
~~~
zerogvt
Great to hear that. I went through that hell (of course fell into the 80%) a
few months ago. Needless to say I was in total awe at the whole process. Awe as
in, what the heck did I waste whole weeks for? How can the -seemingly best IT
company- hire like that? Funniest part was that the recruiter giving me the
bad news was assuming that I'd spent some more time in the future trying again
to get in g. Like this is a life purpose or sth and she actually mentioned
exactly that. Most people don't get in at first attempt. Which is like saying
"we know we do it wrong because we hire the same people the second time
around" so what the heck is all that supposed to test? Perseverance? Anyway -
after that I had serious doubts that I'd ever be happy working there anyway so
I was pretty confident that there wouldn't be a second time.
~~~
twblalock
They hire that way because they are optimizing to avoid hiring bad engineers.
Their entire process is shaped around that goal. In such a process, passing on
good people is not considered a problem -- especially when they will reapply,
which they do.
------
notacoward
I think it's a bit in between. FWIW, at 54 I've never failed a whiteboard
interview, including at my current FAANG employer, so this is most certainly
not sour grapes.
While not _deliberately_ ageist, I think whiteboard interviews work out a bit
that way. What can be tested in such an interview? Only a sampling of domain
knowledge. It will tend to be a sample weighted toward the particular problems
and algorithms that will be super-fresh in the minds of people still in or
fresh out of school (usually because the interviewers themselves are). To
someone older but still well versed in that domain, those _particular_ details
might have been crowded out by a thousand other things learned since. They
might seem less familiar because of changes in language, notation, or idiom.
Those same differences will also affect how the result is "graded" even if the
candidate solves the problem quickly and well.
Whiteboard interviews also fail to measure other things such as ability to
select algorithms or higher-level approaches, people or organizational skills,
industry knowledge, or a developed "instinct" about what symptoms suggest what
problems in the relevant kinds of code. As more weight is given to whiteboard
skills, less weight is given to literally everything else.
As I said, I don't think any of this _intentionally_ disadvantages older
workers, but it can have that effect without intention. It has never hurt me
personally, but I have plenty of peers who I know beyond doubt could code
rings around the people who interviewed them. They just couldn't dance that
particular dance well enough in that moment and got rejected. That's a loss
for the (potential) employer as well as for them.
------
zucker42
Not saying that whiteboard interviews are perfect, but is there something
inherent to them that makes them favorable to younger applicants? Your main
argument to this effect is:
> Whiteboarding simply tests for skills that are not needed nor exercised once
> you're out of uni.
But I don't know if "whiteboarding skills" are any more useful _during_
college either. Also, I always thought explanations were the crucial part of
the interview, and I think technical explanation is an important skill.
That said, it may be a flawed practice. I'm open to arguments to that effect.
But so is everything else[1]. I'm interested to hear some thoughts.
[1]
[https://en.m.wikipedia.org/wiki/Goodhart's_law](https://en.m.wikipedia.org/wiki/Goodhart's_law)
------
dkasper
100% not ageism. Whiteboarding itself is just a skill to practice. I don’t
pick jobs based on the interview process, I pick the companies I wanted to
join and practice the skills needed to pass the interview. After 10 years in
the industry my whiteboarding was always terrible until my last round of
interviews where I decided to actually focus on it and I passed the interviews
at Facebook and Google. My problem was that I could always solve the problems
but I was too slow without the keyboard and the compiler to help. So I
practiced writing code without those tools and got faster. I think if anything
my experience helped, so I’ve come around on whiteboarding. It’s not perfect,
but any good coder can learn it and the bar is not really that high.
------
jcomis
Same with "culture fit" interviews imo. How people don't think they just
create massive bias is amazing to me.
~~~
pytester
"Not the right culture fit" is pretty much code for "we have reasons we'd like
to reject the candidate which we don't want to go into too much detail about".
The vaguer the reason for rejection the more suspicious I tend to be that it's
at least implicitly related to some sort of shameful prejudice.
~~~
erik_seaberg
Dinging a candidate on culture fit is very rare for me, and basically means "I
think some people would quit to avoid working with this guy."
~~~
pytester
IME when a candidate gets dinged for a reason that would make people quit to
avoid working with him (e.g. "he obviously doesn't shower") then people don't
tend to invoke the words "culture fit".
------
crsv
I do not think white boarding is a cover for ageism.
I think a group collaborative exercise where you're sketching, drawing, or
otherwise visualizing an abstract concept to explain a point, answer a
question, or justify a decision is a skill that talented, effective people can
leverage regularly to get great results at work.
I think testing for this skill is a really good idea.
I think that doing this in a way that's equitable and emotionally supportive
to the candidate takes thoughtfulness and effort, and not every company/person
in this process does this well.
I also think there's a great number of people who turn this concept into a
boogeyman to rationalize their interpersonal or technical ineffectiveness.
I would use a whiteboard interview to evaluate a candidate (and have), I would
happily submit to a whiteboard interview (and have), and I think that all
things being equal, the noise around them mostly comes from people who perform
poorly on them and if I'm a betting man, a team made up of people who
performed well on an equitable, well structured whiteboarding exercise would
outperform one made of people who did not in the work context of building
software as a team.
------
closeparen
Some of the most impressive whiteboard/coderpad performances I’ve seen have
been from middle aged candidates. When someone has been using a language for
15+ years, their agility and fluency in expressing their algorithmic ideas in
code is incredible. Most candidates seem to understand and articulate a
solution by the end of the interview, but it’s the fluent ones who can prove
it with end to end working code.
------
passwordreset
Whiteboard interviews would be great if they tested the kind of things that we
would do on a whiteboard, namely design and _maybe_ pseudocode. If someone is
interviewing for an algorithm job, I'd expect they could write text in boxes
that might describe how the algorithm works. If someone is interviewing for a
front end position, I'd expect that they could draw some boxes describing the
UI, and depending on the underlying technology maybe add the info on the
containers and subwindows, or describing the MVC model, or high level actions
drawn with arrows that show what gets affected if some button or UI item is
pressed or selected. If someone is being hired for a networking job, then I
might expect to draw up some boxes that show a topology for some scenario and
identify where the security structures might need to be. For a straight-up
programming job, maybe I could see drawing flow charts or sequence diagrams or
something that might actually be useful to draw out. I would certainly not
expect to write code on a whiteboard. That's not what whiteboards are for.
------
jdblair
I've been on both sides of the whiteboard interview (candidate and
interviewer), and I've failed a good number of them. The last one I had as a
candidate was about 4 years ago, incidentally when I was 42 (I did get that
job and I'm still with the same company).
The best whiteboard interview evaluates a candidate for technical skills and
soft skills at the same time. I want to see if candidates ask questions and
can deal successfully with ambiguity as much as knowing the "right" answer. At
the same time I'm asking questions about their previous work, specifically
looking for concrete examples of how they balance the needs of different
stakeholders in a project.
The worst whiteboard interviews look to evaluate knowledge of a specific
algorithm, and don't provide help when the candidate is stuck. The very worst
is just a pretext to cover up the interviewers biases.
In my experience, the right company is out there for you, but as an older
candidate it can take waiting a long time for the right opportunity where you
can highlight the specific skills you've spent your life accumulating.
------
maximente
i wouldn't take it too personally nor read too much into it. the whiteboard
interview wants to extract signal ranging from "make sure this person isn't a
complete n00b/fraud" to really relevant problems that should probably be
eminently doable, depending on role. but due to a variety of factors it
doesn't do this very well. for many the biggest is social pressure where at
least 1 person is completely watching you work, which is stressful if novel or
you're shy. or lack of familiar tooling, or whatever.
yes there are myriad better ways to extract this signal but, as the saying
goes, you get to choose the game you play but not its rules. you've enumerated
one option which is refusing whiteboard interviews, but you could also level
up your whiteboard game without too too much effort. personally i don't think
it's time to reach for malice after 2 failed attempts at something i'm
inferring you haven't done in awhile.
------
jim_barris77
Your suspicions and frustrations are valid and are based on your experience
alone. No one else here was in those interview rooms with you and if you
subconsciously picked up on the presence of an age bias you probably weren’t
pulling it out of thin air. In my experience in multiple industries as both
interviewer and interviewee I’ve felt similar biases and often had them
inarguably confirmed. And I’m somewhat ashamed to say that I’ve partially
acted on such biases as well in rare cases. Interviewees deserve better
treatment and more transparency, period. No one should have the right to exert
that kind of toxic power dynamic on a fellow professional. I’m 32 in a new
workplace in SF and I can already feel the subtle force of ageism lurking in
the subtext of interactions with coworkers. I appreciated this post, thank you
for speaking out on this. PS I love white-boarding.
------
unoti
There's a basic truth about being a knowledge worker that most people don't
think about. This truth holds for young, old, programmer and non-programmer
alike.
If you want to stay relevant and fresh long term, you need to be continuously
learning. You can get away with not worrying about ongoing learning in the
short term, say 8 years or less, but in the long term it's going to catch up
with you.
This is one key reason programmers tend to move to other careers or into
management after 10+ years: it's a lot of work to continuously learn. And
worse than that, it's uncomfortable and makes you feel dumb when you know
you're really smart and capable. Learning to embrace that feeling of being
dumb, of being a beginner, is key to growth. And once you stop growing and
learning, you start getting old.
For developers this means learning new languages, techniques, technologies,
methodologies. If we accept that this learning is going to be happening, then
it's not too much to ask that data structures and algorithms refreshers are
part of this ongoing learning say 1 out of every 3 or 4 years.
Most engineers actually don't do this, and just stick with what they've been
doing for the most part. Or just happen to passively learn whatever is rolling
by them in the course of doing their job.
For engineering, there are actually tons of things that need to be studied
that are technically not part of the job: writing skills, communication
skills, math skills. Sometimes these things help you even though they're not
part of the daily grind. The same goes for algorithms and data structures.
Studying these things also sends a signal to your interviewer: you're willing
to go figure it out even when it's not fun and not convenient. As a hiring
manager, I like that. And we can debate whether it's a good thing, but it is
what the situation is at many places. Plus, it's fun to study a lot of it, so
it's a good option to get on the learning.
------
rvz
> Whiteboarding simply tests for skills that are not needed nor exercised once
> you're out of uni.
In today's tech industry, these algorithm questions is now used as an effort
to trip up candidates and to always try to "See how you think" in an
unrealistic scenario. The questions I've been given and have seen are mostly
irrelevant and are always done for us in the libraries I use at work.
I had interviewed at one startup in the UK who asked me an algorithm question
that they admitted that they don't use and their excuse is the same: "To see
how you think." Not only this doesn't make sense if one has already seen this
problem, but the fact that they don't use it makes you wonder the real reason
why they are doing this.
The simple answer is: Because it works for FAANG (Who actually do use these
algorithms); Thus everyone else copies them. This is why hiring in this
industry is a complete circus.
> Things like improper syntax and not getting the damned recursive solution
> fast enough.
A problem statement like that is like: `Design and implement the optimal
solution to this 'problem' in X language in less than 15 minutes.`
This is a red flag for you as a candidate who is allowing room for the
interviewer to nit-pick you here to waste your time. It should never be that
important to worry about the specific language semantics here due to the time
limit. But again the interviewer is there to be impressed on how much the
candidate knows about the intricate details of the language + data structure +
algorithms all in 15 minutes, which is a nonsensical requirement for a typical
software engineering role.
Frankly speaking, the interview dance in the tech-industry is an excuse to
trip up candidates who have not practiced for more than 6-9 months on data
structures and algorithms and the latest tools that everyone is adopting. It
is so 'broken' that interviewing at FAANG companies is an entire industry
itself.
For them, whiteboarding like this is essentially trying to find a silver
needle in the sky.
~~~
twblalock
A lot of teams at FAANG companies don't use those algorithms either.
However, if you want to hire generalist programmers who are good at solving
problems and can work on a variety of projects, it's perfectly reasonable to
want to see how they think.
------
rgbrgb
I've only done whiteboarding in interviews for system diagram type stuff and
explaining high level ideas. At my job, this is mostly what we use the
whiteboards for in day-to-day work. Some people say they built a system, but
when you ask them to draw the boxes and arrows to describe how data flows, it
becomes obvious what pieces (if any) they actually touched. I think the people
who do best in these are either more senior in their careers or have done a
lot of side-projects from scratch. They have either seen a system evolve over
time and solved scaling bottlenecks or they've set up many systems with a few
different stacks and have thought about what the pieces are for.
------
ebiester
It depends. If you're talking about whiteboarding code, I only have to do that
once a year or so, usually because I'm in a meeting and want to demonstrate
what I mean and code is easier. If you're talking about whiteboarding a
design, I do that much more frequently. I prefer a pair programming exercise
in this case, but I've reverted to whiteboard in cases where there were
technical issues. It's not my favorite but neither is trying to do it in a
text editor without support.
I see nothing wrong in having to show a high level design of a system as a
senior engineer, because we are asked to do that quite often.
------
jasonmcaffee
I'm older as well, but I usually ask my candidates to use the whiteboard to
work out solutions. It demonstrates that you can communicate effectively, as
well as think on your feet :) I don't think it should be all about coding
riddles, but I also don't think you should be surprised by them.
As a developer, I've always used whiteboarding as a way to communicate ideas,
architecture, etc. There have been whiteboards in every office of every
company I've worked at.
I'd say somewhere around 75% of all interviews I've had for positions at other
companies have included whiteboarding.
------
jmull
I don’t see ageism in it.
If you’re, say, six years out of school you are still quite young but your
school skills that might give you an advantage at a whiteboard will be pretty
damn rusty.
Interviews in general are not really “fair” since the employer really has no
way to get out of a short meeting (or series of short meetings) what they
really want to know.
Given that it isn’t possible to have reasonable certainty and the massive cost
of being wrong, a strategy of erring on the side of rejecting good candidates
rather than accepting poor candidates isn’t unreasonable as long as you feel
your pool of candidates is rich enough.
------
xvedejas
I'm young and have also been turned down often due to whiteboard interviews.
What surprises me is that you were only rejected twice in a few months. I had
to go through many, many more rejections than that much quicker than that
before I found a good fit. Don't assume these interviews favor young people,
it certainly didn't feel like that to me. I think most places just want to
reject most people, because it's easier to justify rejecting someone good than
accepting someone bad.
------
blunte
I'm going through this right now, with 26 years of professional experience and
an undergrad degree in CompSci.
There are many problems with hiring. Many. And as with standardized testing in
public schools, one problem is how to judge the capability of many people
(applicants, in this case) in a reasonable time, ESPECIALLY when the judges
may not be experts in all relevant topics. The popular answer is to test
against standards that are easy to measure. In the end, the companies may end
up with fewer false positives (hiring an idiot), but will accept the
likelihood of having more false negatives (missing out on a good employee).
I don't agree with this approach, but mainly because of more a fundamental
reason. The #1 reason that most hiring is broken nowadays is that it focuses
on specific skills (often an unlikely/unrealistic laundry-list of
technologies) rather than focusing on the candidate's ability to learn, think,
and communicate.
Regarding filtering out older candidates, this is more a side effect of
testing on topics that would be more familiar to more recent grads. Likewise,
if interviews included demonstrating how a candidate would choose between C#,
Python, Ruby, Go, C++, Java, or Javascript for a given fantasy scenario, or
when to choose between bare metal servers onsite or virtualized hardware
onsite or managed or unmanaged cloud resources, younger and less experienced
candidates would fail more than older ones. They just don't have enough
experiences in different situations to know how to make these decisions (with
any reasoning to back them up).
The practical reality that I see is that older developers just may not be
worth the extra cost compared to younger, less experienced ones. This will be
especially true in many corporate environments where the more experienced,
more senior dev will have higher expectations, want more authority and
autonomy, and otherwise feel stymied by corporate drag. If the older worker
hasn't ascended into management, they have little road left to travel.
What many of us older devs didn't realize is the situation we would now be in.
At this point, we must forge our own paths - by starting a consulting
business, forming a startup, building a SaaS or other income-generating
product, etc. It does little good to be angry at the system (since our
complaints will absolutely not change anything).
------
tw1010
I buy this. But then again, why go through all the trouble? They can just
reject you once they see how old you are (and blame something else or not tell
you at all).
------
scarface74
I’m 45, had dozens of interviews in the last 10 years, only one or two
rejections and 5 jobs. I am a developer and haven’t had anything resembling a
white board interview.
For the job I have now, I was hired as a developer but never actually had any
development related questions besides asking about my previous projects and
architectural decisions. I also don’t live anywhere near Silicon Valley and
mostly worked at your standard Enterprise shops.
~~~
zerogvt
Can you please contribute the names of these companies to
[https://github.com/poteto/hiring-without-
whiteboards](https://github.com/poteto/hiring-without-whiteboards) You'd make
me and a lot other people a big big favor. Thanks a mil.
~~~
scarface74
I can tell you that it’s every company I’ve interviewed with in Atlanta for
the last 20 years.
------
II2II
Anyone can crash and burn in a situation like that, particularly since it
sounds like they are putting too much emphasis on the wrong things. I have had
a couple of interviewers go back to the question and ask how I would approach
it in a more realistic setting, meaning that they were more interested in
process than the solution. Unfortunately, that was the exception rather than
the rule.
------
jdauriemma
If we stipulate that whiteboard interviews might be useful for evaluating
candidates in some cases, a good whiteboard interview would have specific
criteria expressed in some sort of rubric or other document. That document
would be shared with the candidate and evaluator(s) in advance of the
interview. This ensures that the candidate and evaluators have a shared notion
of what constitutes success before the interview. An opaque process,
unfortunately, can cultivate the type of suspicion that you're expressing
here. It sounds like your whiteboard interviews did not give you a good sense
of the types of skills they were looking to evaluate during the interview.
I think it's great that you have such high self-esteem and aren't doubting
yourself just because some other people don't see your value. That said, I
think more data is needed before concluding that whiteboard interviews in
general are tantamount to an age filter.
Edit: please note that I have said "if we stipulate," not "if we accept as an
a priori truth" that whiteboard interviews are good. It's simply a rhetorical
device for evaluating whether or not your whiteboard interview is at risk for
evaluator bias.
~~~
Kapura
This is a ridiculous position to take. The amount of code that I have to write
on a whiteboard to successfully do my job as an engineer is shockingly close
to nil. The very notion of having a whiteboard section for a programming job
is insane.
You wouldn't expect a driver's test to include a significant oral portion,
where the drivers spend an hour describing how they would parallel park. A
whiteboard interview makes just as much sense, regardless if you're sharing a
rubric.
~~~
twblalock
Whether or not whiteboard interviews make sense for programmers is a different
question than whether or not they are intended as an age filter.
~~~
Kapura
If there is a portion of a programming interview that doesn't make sense for
programmers; but the results of this section skew the results of the interview
highly towards recent college graduates, what would you say is going on?
~~~
jdauriemma
I think you're addressing a symptom, not a problem. When any candidate
evaluation process is opaque, you invite bias into the process. Whether
whiteboard interviews are useful or not is subject to debate (it's not a
factual statement that whiteboard interviews are universally bad). Whether a
whiteboard interview is done fairly and without bias is a different
proposition altogether, and the interviewers can do a lot in order to address
this concern.
------
davidw
I'm a bit older than you, and am really wary and tired of lame interview stuff
too.
------
kleer001
IMHO two samples isn't quite enough to form useful data. It's a rough trend,
certainly, but more tests are needed to be sure.
Could be you dodged a bullet. Could be the interviewers were hungry. Could be
they didn't like your clothes.
42 is not that old.
~~~
mrhappyunhappy
Could be his attitude. The older we get, the more rigid we come across. I know
I personally put up with a lot less shit than I used to, hence why I doubt
many would want to work with me - and that’s fine by me.
------
mars4rp
This is a discrimination issue, and this thread is full of comments about
people that have not been discriminated against. good for you, but it doesn't
mean the problem doesn't exist.
------
JMTQp8lwXL
I think leetcoding has become the norm to increase the friction when
considering new job opportunities and keep wages lower. It incentivizes people
to cut down on job hopping.
~~~
esoterica
That makes zero sense. Companies may want to increase friction for people
leaving them but they want to DECREASE friction for people joining them. Using
white boarding as part of your recruiting process would increase friction in
the wrong direction.
~~~
JMTQp8lwXL
If everyone uses whiteboards and leetcode, existing employers win. Now, I'm
less incentivized to join $NEW_FANG because I have other obligations in life
and can't commit to that much leetcode. On second thought, it does lead into
the OP's point: it seems like a legal proxy for age discrimination. New grads,
which will work for less and offer no experience (beyond internships), will
have more aptitude for leetcode-style assessment.
~~~
esoterica
Read my comment again. White boarding increases friction in the wrong
direction. Companies make decisions to benefit themselves, not the industry as
a whole.
Also hiring people who are willing to work for less money is not age
discrimination.
------
rolltiide
Literally only college grads are marginally good at that. Nobody likes
whiteboard interviews and has to brush up. Everyone knows tech interviews are
inefficient.
2nd interview in a few months you were rejected? The only ageism here is that
you probably dont have time to interview at the number of companies that your
competition does. Last time I interviewed I did 16 interviews in one month at
probably a dozen companies, received 2 offers. There are some blogs posted
here where people talk about doing many many more than that.
I would say whiteboards suck but no not the ageism you are looking for.
------
netik
You’re giving zero details here aside from “i failed the whiteboard interview
and it was horrible”
What happened?
Maybe it’s you.
------
username90
Most juniors fail white-boarding as well, so we can assume that most seniors
would have failed whiteboard interviews when they were young too. So I think this
isn't ageism, just people with lots of experience who don't want to admit that
they weren't the top X% of their cohort. Of course it is easy to believe you
could have passed them back then due to the Dunning Kruger and the "Good Old
days" effect, but that doesn't mean that it is true.
If your argument is that you barely passed those interviews a decade ago and
it takes too much energy to practice again, then I'll say it is a feature. The
more we discourage less capable people from joining the better, don't you
agree?
Of course this all assumes that white-boarding is a good indicator, but the
companies administering these interviews have pretty good statistics that they work. Also
it doesn't have to be on a white board, at least Google lets you write on a
Chromebook since a few years back. The important part is that the candidate
has to solve and code a non-trivial problem not available in public (meaning
they can't have seen it before).
------
netik
You’re giving zero context here aside from “there was a whiteboard and it was
horrible.”
What happened?
Age has nothing to do with it - if you can’t communicate.
~~~
zerogvt
I've already said way more than I should considering I need to stay in this
business for a good few more years. Communication was fine if that's what
you're asking.
------
cr0sh
I'm a bit older than OP - just recently turned 46. I've been a software
engineer (professionally) since I was 18 years old when I was first hired by a
small mom-n-pop shop.
One would think I could simply plop my resume down, do an in-person interview,
show a bit of code I've written in the past, plus my github, and that'd be
enough. Alas, it isn't.
I've had good interviews that used a whiteboard, and bad ones that did.
Overall, though, I detest whiteboard "challenges" and I specifically avoid
them. Currently, with my use of recruiters, I tell them specifically not to
send me to such interviews if they use a whiteboard (it would have to be
something really unique for me to consider it - maybe for a ground-floor
startup opp, or something in robotics or AI, etc).
The best interview I had that used a whiteboard was basically where they asked
me to write fizzbuzz whiteboard style. Whenever I am given such a task (ie -
write code), I ask the interviewer if they mind if I use "pseudocode" \- just
to get the whole "wrong syntax or keyword" issue out of the way. I've never
had a turn-down from this ask. I liked that they wanted me to show I could do
fizzbuzz "from memory" because it would show I wasn't copying/pasting from
that github repo of "all versions of fizzbuzz", and it would also show I had
some idea about programming. After that, and the in-person portion of the
interview, they gave me a take-home challenge to write some piece of small
software (IIRC, it was a random-number dice game or something), and upload it
to their system for evaluation. I actually ended up getting an offer of a
position from that job, but I ended up taking a different position with
another company.
The worst whiteboard interview I had, though, felt like a complete grilling
session. It started off reasonable enough; ushered into a large conference
room, and I was questioned by a couple of programmers on their team, plus
their hiring manager. All seemed ok. They asked me to do some whiteboard work
- some SQL coding IIRC, among other things - all was going ok as I was writing
down an answer, but every time I turned around to the interviewers - there
were more people in the room. By the end of it all, it felt like the entire
staff of the company was in that room and nobody was out doing their job. Had
to be 20 or 30 people in there. To be honest, it rattled me - mainly because
it was so odd.
I didn't get an offer on that job - and to this day, I am glad I didn't. While
the company and the office location all had a "hip and upcoming" startup-like
vibe, plus open-office floor plan, etc - at the same time, I wondered why they
had such a large SWE team (10+ people from what I recall) for what was
essentially a basic PHP CRUD application.
It was interesting to note OP's idea of it being a potential age filter; I'm
not sure I agree with that fully. I wonder if it isn't meant as more of a
filter for those who didn't go to school to learn the craft. I mean, I've
probably done at least once certain things being asked for - but if they couch
them in terms that are "defined in the literature" (this also covers asking
about "patterns") - I'll most likely be lost. Because I don't know all of that
terminology, or what it applies to. I've been coding in some fashion or
another since I was 10 years old, but I haven't gone to school for it (aside
from a couple of community college classes for C/C++ - algorithms and/or
patterns were not discussed).
If so, it's kinda on them because they didn't read my resume carefully; I note
up-front that I don't have that education, that I am a self-learner, and that
I don't like to be a "specialist" in a particular language or framework-du-
jour. Rather, I'm a business problem solver - the ultimate choice of how that
problem is solved is an implementation detail that has only a certain bearing
on the problem solution. Mainly, it's better to come up with a solution and
then choose a language for the specifics of that solution that will work to
implement it. 9/10 times you don't need the latest language or framework for
most problems. It's more knowing when you do, rather than just picking one and
sticking with it forever (ie - I don't want to become someone mired in only
using and knowing COBOL). Likely, any problems that crop up in a solution tend
to do with how the solution was implemented, not what language/framework it
was implemented in.
Lastly - something I have noted post-interview is the fact that seemingly none
of the potential employers care to contact or do contact any references you
give them. You can ask them if they want/need references, but even those that
do, never follow up on them. I am not sure why this doesn't happen - maybe
it's just a simple constraint of time vs number of resumes/interviews? Or
maybe it's due to past candidates gaming the system, and making it an
unreliable metric to the process?
------
wellpast
This is an interesting take. I'm in my forties, will/hope to be a direct tech
contributor for my full career, and definitely make sure to prepare well for
whiteboard interviews. I find them fun to a degree, and I used to nail them in
my youth...and still do well...
Still, the ageism take is interesting but I don't like falling back to that
kind of thinking b/c it is counterproductive.
The evolution of my own performance in whiteboard interviews (and anecdotally
what I've heard from other experienced developers) is interesting:
I don't feel like it is directly due to age or cognitive change/decline, but
more to do with mastery/experience.
As you get more experienced in this craft (as with others), one of the most
important skills you learn is navigating _context_. After solving real world
problems repeatedly, you move further away from the aseptic environs of
academia/learning and find that prioritization and contextual understanding is
far more important to superior execution.
More explanation...
I was in an interview recently and got the typical line of puzzle coding
questions. These puzzle questions are completely denuded of context (which is
irritating to a professional practitioner). Or, rather, the context becomes not
to solve a problem with a given set of _outcome_ constraints (real world) ...
but solve a problem and try to guess what the interviewer's particular
fetishes are and try to hit those.
Do you get the impression the interviewer has a fetish for OO solutions? You
better angle on that or you're going to get the ding. Do they want to see you
pull out a cool data structure like a min heap? You better realize that right
away and get there. Generally speaking, you can "win" in these types of
interviews if you angle for big-O optimality (I've found).
I've interviewed this way, myself. But over time -- with experience -- I've
found that this doesn't really even correlate that well with outcome. I've
interviewed and worked with people that absolutely nail these whiteboard
problems but when you get them on the job, and with real world curveballs
thrown at them, they freeze - either due to some issue of work ethic,
psychological issue/fear, or in the absence of _grades_ they just can't seem
to make a move.
What I've found is interviewing with just a Q & A style response and digging
into the work they've done and finding out how much ownership they have taken
in their past work, how much curiosity they show (for any given piece of tech,
just ask and see how well they know it and can even teach it to you if you
don't know it), how much drive they have to get things not just done, but get
them done well/solidly. Then, pretty quickly I can tell you if we've got a
good hire.
Conversely I've made hires where the candidate did not do so well in the
whiteboard but proceeded to excel in the professional context.
Think about this: if you have a person that is curious and drives to build the
thing well... imagine the day (and these days come but not so often) that they
encounter some need for a difficult algorithm. What is this conscientious
person going to do? They're not going to wing it and write a bad solution. They
are going to go do their due diligence, their research, consult with their
team members, and they'll get the right solution. So, say they are not so
great with coming up with a novel algo on their own. One, that's a skill they
can develop, but two -- it's a skill that's only needed in a few people on
your team and then you'll overcome those steps at the intermittent times that
they come through.
Now of course this ^ way of interviewing requires a personal skillset in one's
self -- that is, you have to have a good work ethic, be intensely curious,
very responsible, etc in order to recognize these same qualities in the
candidate.
Unfortunately, I've found the industry is dominated by "smart people" who
tend toward what I call a 'philistinic' position. They lean way too much on
their natural intelligence and use that as an excuse to not really grow their
skillset. This is what some people call 'expert beginners' and there are far
more of these, ime, than truly skilled practitioners.
So -- back to the ageism thing. I think it's less ageism and more an overall
industry deficit. If we understood our work better and could produce more
masters/professionals then I'd think we'd see an increased valuing of
experience.
Blaming it on ageism I think only deters us from getting there.
------
dlphn___xyz
Why are you still applying to entry level dev roles at 42? Do you have a
portfolio?
~~~
tomalpha
I'm around this age and haven't applied for entry-level dev roles since I was
truly an entry level candidate.
I've endured whiteboard sessions for every role I've gone for.
~~~
dlphn___xyz
shouldn’t you have built up a body of work that conveys your level of
knowledge so you don't have to endure these assessments?
------
brador
If it was trivial you would have completed the exercises, no?
Truth is you're using age and "years coded" as a cover for competency. Hit the
books, refresh your knowledge stack. There's no shame in improving.
~~~
jerf
I'm 40. As it happens I like Haskell and I've spent some time with it, and I
expect I could get through most recursion interview tests, so this is not sour
grapes on my part, but: I really can see just how easily my career and hobbies
could have turned so I would be a good developer that should pass interviews,
but stumble and fall when trying to write a recursive version of some
algorithm.
(I'd add that in practice, almost nobody ever _directly_ writes a recursive
version of anything anymore. You should generally be using combinators like
map and filter and the crazier stuff Haskell provides rather than directly
writing recursion schemes by hand. It does happen in Haskell for a couple of
specific cases, but even the ones I can think of are exceptional cases for
dealing with certain optimizer corner cases, not places where it was
absolutely necessary to write direct recursive code. Even for a pure-FP job I
wouldn't hammer on it in an interview; I'd want to very rapidly move up the
stack to more interesting and relevant questions in our precious interview
time. I suppose it's an integration testing approach to interviews rather than
unit testing; I'd rather find something where recursion is used in passing to
ensure that you've got it than sit there and pound on that specifically. It's
not a good interview question unless that's all your interviewee can handle.)
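To illustrate the combinator point outside Haskell (a made-up illustration, not
something from the comment), the same shift shows up in Java: a hand-rolled
recursive solution versus the stream pipeline most day-to-day code would use.

    // Direct recursion over a list -- the style whiteboard questions often ask for.
    static int sumSquaresOfEvens(java.util.List<Integer> xs) {
        if (xs.isEmpty()) {
            return 0;
        }
        int head = xs.get(0);
        int rest = sumSquaresOfEvens(xs.subList(1, xs.size()));
        return (head % 2 == 0 ? head * head : 0) + rest;
    }

    // The combinator version: filter/map/reduce instead of explicit recursion.
    static int sumSquaresOfEvensStream(java.util.List<Integer> xs) {
        return xs.stream()
                 .filter(x -> x % 2 == 0)
                 .map(x -> x * x)
                 .reduce(0, Integer::sum);
    }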
------
27182818284
The whiteboard threads are always interesting to me because with more than ten
years experience in software development in enterprise and startups now, I've
never had to do a whiteboard interview.
Are they common outside the big six names? I.e., outside of Facebook, Google,
Apple, Microsoft, Amazon, and Netflix? I wonder what the actual percentage of
"Needed to pass a whiteboard-based Q&A to get a salaried position" is.
From my viewpoint, having never run in to them, they seem like something that
was much more popular in the past, but now rarely used or only used at the
aforementioned big six companies.
~~~
luc4sdreyer
In the 2019 Stack Overflow developer survey, 27.8% of respondents said they
encountered whiteboarding as part of their last successful interview process
that resulted in a job offer.
[https://insights.stackoverflow.com/survey/2019#work-_-
interv...](https://insights.stackoverflow.com/survey/2019#work-_-interview-
practices)
~~~
27182818284
Oh cool, thanks for the link! I should have thought to look at SO, as I
generally read those survey results, but it didn't occur to me to look for
this question there. 28% is a bit higher than I would have guessed even to
that answer.
Why do people still say Java is slow? - yati
http://programmers.stackexchange.com/questions/368/why-do-people-still-say-java-is-slow
======
srean
If you are doing a lot of processing on numeric arrays, then no, Java is
certainly not one of the faster languages. It's very difficult to have safety
(for example no out of bounds dereferencing of arrays) and performance. (An
extremely interesting language that tries to do that is ATS
<http://en.wikipedia.org/wiki/ATS_%28programming_language%29>)
Yes, in theory Java will eliminate most of those checks, but the difference
between theory and practice is more than what theory would suggest.
For simple loops for (i = 0; i < n; ++i) where n is a compile-time constant (or
something that can be proved to be a constant) this would work. But one
encounters a lot of loops where n isn't a constant. This is just one example of
what slows Java down; there are others. It overspecifies semantics (think
order of evaluation of arguments), gaining safety but losing on performance.
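To make that concrete, here is a minimal Java sketch (purely illustrative, not
part of the original comment) of the two loop shapes being contrasted; whether
a given JIT actually drops the range checks depends on the VM and version:

    static double sumAll(double[] a) {
        double s = 0.0;
        // Bound tied to a.length: the JIT can usually prove the index stays
        // in range and hoist or eliminate the per-access bounds check.
        for (int i = 0; i < a.length; ++i) {
            s += a[i];
        }
        return s;
    }

    static double sumFirstN(double[] a, int n) {
        double s = 0.0;
        // n is an arbitrary runtime value, so each a[i] access may still be
        // range-checked (and can throw if n exceeds a.length).
        for (int i = 0; i < n; ++i) {
            s += a[i];
        }
        return s;
    }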
That said, I think the JVM is one of the best optimized runtimes that we have.
Overall the JVM is a great platform, but Java is not necessarily the best way to
exploit it. Coming back to Java performance, in my experience it would get to
around 80% of the speed of C or C++ code but would use 3~4 times the memory.
No careful benchmarking behind this, just anecdotal experience.
JVM developers, in case you are reading this, please add tail call
elimination and SIMD.
I am not a "3 cheers for dynamically typed languages" person, but for Java I
am willing to concede that the type system comes with drawbacks but little
benefit. In C++ and D you can pull the gloves off when required. And please
don't get me started on JINI.
A problem is that Java based systems are slow both in the big (I am looking at
you Hadoop) and the small. You can run some of the C++ based mapreduce
implementations (<http://sector.sourceforge.net/>) and draw your own
conclusion.
~~~
camo
> I am not a "3 cheers for dynamically typed languages" person, but for Java I
> am willing to concede that the type system comes with the drawbacks but
> little benefit. In C++, and D you can pull the gloves off when required. And
> please dont get me started on JINI.
I expect you mean JNI and not Apache JINI...
That probably highlights the difference between Java and other languages you
like to mention, it may use more memory and be marginally slower
computationally, but when you take that the most interesting applications
these days are distributed the ability to sort an array and save a few clock
cycles fades into insignificance compared to network latency and connection
times.
Not to mention also, you can optimize your CPU's workload or you can optimize
your personal workload. Java lets you pick great libraries off the shelf. Or
you can choose to manually do your own array bounds checking and pick up your
own garbage.
------
dottrap
Because Java set itself up for failure. Java _can_ theoretically be faster
than optimized C code, but the problem is that people over-promised and/or
believed this would be the common case. Sun/Oracle failed to deliver a JIT that
outperformed optimized C code in the common case.
Java also messed up and was too preoccupied with getting faster throughput (in
too specialized circumstances) and didn't focus much on latency/user-
responsiveness.
End users don't care about theory. They only care about their individual user
experience. When they see slow launch times, long unresponsive pauses (due to
garbage collection), and similar apps written in native languages running
faster and being more responsive as a general trend, people are going to
rightly blame Java.
------
habosa
I don't think people who really know what they're talking about say that. Sure
the JVM is slow to start so there's latency involved in getting Java up and
going, but Java is very fast over the long term.
~~~
dlitz
... which is useless when you're trying to develop Java code.
Java also eats a lot of RAM in many applications (J2EE apps can easily eat a
few GB of RAM), so combined with the multi-second latency for startup, you're
basically confined to a single programming model: One giant, long-running,
multithreaded program.
~~~
democracy
Not necessarily. It "might" consume a lot of memory depending on your
particular application.
We recently went live with a suite of applications (5 apps all together, both
web and backend), processing tons of data in "real-time" in multiple threads
and after a few days of usage someone noticed it was running with default 512M
of RAM (8gb planned).
Considering the load it is processing - it is very impressive. And yes, it is
using all of J2EE in WebSphere with mq, hibernate, spring core, spring
integration, etc.
BUT I could not make JIRA work with 512 linode, had to upgrade to 1m and later
on switch to jira-cloud as it was failing all the time.
------
mtdewcmu
One of my theories on why Java feels so doggone slow on GUI apps is that it
refuses to use native platform APIs for anything. Everything must be
reimplemented in Java. So Java manages to feel more sluggish than python,
which has a slower interpreter, but isn't bound by doctrine to reimplement the
entire UI toolkit in pure python.
That, and I think Java has huge internal libraries, full of redundancy, that
get recompiled on every run, because nothing gets done without them.
The fundamental problem is that Java can't decide whether it wants to be a low
level or high level language. It tries to be both at once, and one casualty is
performance.
------
anip
Admittedly I don't have that much experience in Java but have worked on C++
systems with sub 10us latency (and it did fairly complex things). Any memory
allocation can take an order of magnitude more time than that.
Apart from writing your own memory arena, you would need to put local objects
on the heap in Java. It is possible to write high performance Java, but like a
previous comment mentioned, that code looks more and more like C++, so why not
just write C++?
------
voidlogic
Because that is what someone/blog/article told them one time (or less commonly
they remember 1998 Java performance). Java is one of the faster languages out
there and has been since 1.4+:
[http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...](http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?test=all&lang=all&data=u64q)
~~~
pubby
It's one of the faster languages... on benchmarks. Sure you can get Java to
perform close to C if its written like C, but if you've reached that point,
why not just write C code?
~~~
afsina
Dynamic memory allocation, pointers, lack of IDE's etc etc..
~~~
voidlogic
This. Java is almost as fast as C++, safer for the average bear, and IMHO
easier in the maintenance phase.
I make that last point because I think new people coming into large C++
projects often take longer to ramp up and understand the code than in large
Java code bases. But that is purely anecdotal.
------
MindTwister
I recently went back to an old Java app to benchmark it against some new code.
For small command line utilities the startup overhead can be massive compared
to the actual running time of the program, even if the <insert algorithm> runs
100ms faster in Java compared to C/C++/Go/<insert other> it won't matter if
the startup time takes 400ms longer.
------
discardaccount
Until Java can beat C/Fortran in FFTs and SVDs, it's too slow for me. Show me
FFTW in Java please. Where's the Java LAPACK?
~~~
oakwhiz
[https://sites.google.com/site/piotrwendykier/software/jtrans...](https://sites.google.com/site/piotrwendykier/software/jtransforms)
~~~
discardaccount
Thank you for the link, but the benchmarks at the bottom of that page show
that JTransforms is almost always slower than FFTW, and in only a few cases is
it marginally faster. Benchmarks are always questionable, and I really wish
they had compared single threaded performance, but I think this shows that
Java still has some catching up to do before anyone should be called foolish
for thinking C is faster.
------
marizmelo
The problem with Java isn't being slow. Look JavaScript is the fastest growing
language in usage lately and is much slower than Java. What is the problem
then? The community (sorry, just my opinion). Where is the npm (maven? c'mon,
something good please) for Java where I can find plug-and-play modules for my
applications? Where are the exciting web frameworks for Java like
Express/Rails/Laravel (ok we have play... but still). How many times I've
heard something exciting about Java here on HN? 1...2... never mind.
Stop being a secret society (sorry again) type of community and start putting
more projects on Github (that are easy to use and get started with), writing
more tutorials on visible places, being more active.
I am sure I will hurt some feelings here. Get easy on the comments ;)
~~~
democracy
:) as far as I remember, Java was (is?) the most popular language on SF.
Java devs are not too vocal on HN lists, but this is the nature of HN, which
is not a fair reflection of the state of things in the industry....
------
IbJacked
Seems to me that the only time I hear that Java is slow anymore is when I read
someone asking "Why do people still say Java is slow," and the like.
Do people still say Java is slow?
------
democracy
There is no such thing as "slow" or "fast". It is all about the satisfaction
of requirements.
------
InclinedPlane
Because it is? There are some circumstances where the slow-down is minimal or
at least inconsequential, but not always. JIT-delay is a real thing, and even
though in some applications the impact can be minimized (e.g. services) in
others it's not. There's a reason why it's still very rare for games to be
written in java, for example, and it's not because of blind prejudice.
~~~
spullara
Aside from one of the most popular... Minecraft (played for days and never saw
a GC pause). I think it actually is because of blind prejudice OR POSSIBLY
expressiveness rather than "slowness" since most of the scripting languages
within the game platforms also make use of GC systems that are much worse than
the JVMs.
~~~
Quequau
I'm one of those folks who suspects that projects which were written in Java
might be slower than they otherwise would have been. I also bought Minecraft
some time ago (back when it was roughly €7).
Having used Minecraft for a pretty long time, I don't feel like it's the sort
of thing someone should hold up as exemplary of how fast Java is / can be.
Instead, for me at least, it's more evidence of Java apps having troubles
making use of available system resources (particularly many cores and/or
larger amounts of RAM) as well as demonstrating that portability still can be
a problem with Java apps (as in OpenGL v OpenGL ES; MCPC v. MCPE, MCPI &
MC360; and even windows v Linux & Mac).
I get that it is possible to write Java such that certain things run pretty
fast. I also don't think that these sorts of metrics should be used as the
only metric every developer uses to decide what language to use for their
projects. Therefore, I don't feel that these are reasons to conclude that
someone else should or shouldn't use java for their project (as surely they
have their own priorities which they base their own decisions on).
On the other-hand I don't think that any software programming language having
advanced features which allow for efficient use of these various other system
resources matters much, if many (most?, most popular?, many popular?) projects
don't use them (or perhaps don't use them correctly). For that matter the same
goes with hardware... it hardly matters if some chip has amazing super powers,
if few projects use them.
Back to why people think Java app can be slower than they ought to be. My
experience has been that when Java first began to become popular, it became
the language of choice in many universities. Not long after that it was common
to find poorly designed & implemented projects to be written in Java. While
this has changed by now, nearly all of the Java projects, that I have
experience with don't have an obvious priority of maximising performance by
using all available system resources as efficiently as they can be.
------
od2m
Eclipse -- The titanic of IDEs.
Y Combinator Article Nominated for Deletion by Wikipedia Administrators - ciscoriordan
http://en.wikipedia.org/w/index.php?title=Y_Combinator&oldid=234584904
======
pb30
Rather than just complaining about Wikipedia, I've contested the deletion by
removing the prod tag and added some sources from major publications. Please
help add references from reliable sources (blogs dont count) or help copyedit
the article.
Here's a good search for finding reliable sources:
[http://news.google.com/archivesearch?q=%22y+combinator%22+so...](http://news.google.com/archivesearch?q=%22y+combinator%22+source%3A%22-newswire%22+source%3A%22-wire%22+source%3A%22-presswire%22+source%3A%22-PR%22+source%3A%22-press%22+source%3A%22-release%22&btnG=Search+Archives)
~~~
rokhayakebe
Rather than just complaining about Wikipedia, I would love to see 2 smart guys
in a garage create a competitor.
~~~
hugh
It's trivially easy for two smart guys in a garage to create a competitor.
It's difficult to persuade hundreds of thousands of people outside said garage
to write the actual content.
~~~
echair
I think most people would understand "create a competitor" to entail more than
the strawman of mere implementation you seem to be taking it to mean.
~~~
hugh
The point is that the main problem can't be solved at the two-guys-in-a-garage
level.
Two guys in a garage can create a wiki. Two guys in a garage with a million
dollar PR budget can create a wiki and get it widely publicised. Two guys in a
garage with a fifty million dollar budget can hire a bunch of writers to get
the content kickstarted. But ultimately, the problem of persuading thousands
of people to contribute to a brand new service is not a problem that anyone
knows how to solve -- it either takes off or it doesn't.
~~~
rsheridan6
The two guys have an advantage - lots of former editors are sick of Wikipedia,
and non-deletionists have nowhere to go. I don't see any reason why two guys
in a garage couldn't succeed.
------
Mystalic
Before everyone goes nuts, please consider the following:

\- Wikipedia allows for a civil debate on deletion matters. That's why the talk
page is there.

\- Back up your arguments with logic and facts instead of floods of "YOU ARE
WRONG" - that will get you nowhere.

\- Don't flame anyone for their opinions.

\- Most of all, let's defend the notability of Ycombinator. As a tech
entrepreneur and professional blogger, I believe that A) YCombinator is very
notable for not only who it invests in, but its unique style and that B)
people benefit from that information. So I will argue with logic, facts, and
courtesy. I hope you all do the same as well.
~~~
iamdave
How about, instead of running in blazing with integrity, already prepared to
disagree with someone, we "please expand or rewrite the article to establish
its notability" and bring it up to Wikipedia standards?
~~~
Mystalic
That's a given. The article isn't up to wikipedia standards at the moment.
------
snorkel
Since we're refraining from flaming the wiki wackos on wikipedia let's do it
here instead: Get a frigging grip on reality. That page is not that bad. It is
useful information. Deleting should be reserved for obvious spam or completely
irrelevant or wrong information. But that's fine. The wikieaucracy is
gradually destroying wikipedia paving the way for something better to take its
place.
~~~
ciscoriordan
There's something seriously wrong when the article about Disqus is deleted for
not having reliable sources, and a search on Google News for "Disqus" shows
articles from the Washington Post, CNET News, Mashable, and VentureBeat, all
on the first page.
~~~
wmf
Were those sources actually linked from the Disqus article?
~~~
pb30
No, the old article was pretty weak, just a couple of lines and a link to the
TechCrunch launch post.
------
jbyers
I don't mind Wikipedia's standards, but I wish they were more evenly applied.
There are hundreds, maybe thousands of companies that are less notable than YC
that will never be deleted. Instead it is those that are somehow controversial
and questionably notable that will be flagged for deletion.
~~~
mikeryan
YC isn't "Not Notable" they're saying the article doesn't have the sufficient
supporting evidence of its notability. (Which it doesn't)
This is an easy fix.
~~~
tptacek
It _is_ kind of annoying that an admin prodded a page that has mainstream
print pubs on the first page of news.google.com results.
~~~
silentbicycle
It looks like said mainstream print pubs weren't referenced in the deleted
version, though. (Several are now.)
Wikipedia generally doesn't use <http://justfuckinggoogleit.com/> as a primary
source.
~~~
tptacek
No, WP doesn't use Google as a primary source. Neither did I. The first page
of Google results includes sources in mainstream print publications, many of
which have YC as their actual subject.
If you have a concern about sourcing, there's the "refimprove" tag. An
uncontested "prod" deletes the article. It _is_ bad form to prod things you
didn't even take the time to look up. If you want evidence to that effect, try
slapping an AfD on the article and see how long it takes to speedy out.
Then, let me know you did, so when your RfA comes up, I can cite the AfD.
RfA's have failed over silly stuff like this.
~~~
silentbicycle
(Incidentally, sorry if that came across as more hostile than intended.
Rereading it, it was more curt than I realized.)
------
radley
Ironic that they'll remove actual people and companies, yet they retain 100s
of pages covering the Star Trek universe:
<http://en.wikipedia.org/wiki/United_Federation_of_Planets>
~~~
hugh
It's true that people tend to be more aggressive about deleting pages about
people and companies than they are about deleting random Star Trek crap, but
it kinda makes sense. Most of the truly rubbish pages which show up on
wikipedia are random schmoes creating their own wikipedia pages to promote
themselves, their businesses, their bands or their blogs, so the notability
criteria for people, businesses, bands and blogs are pretty firmly enforced.
At least there's only a finite supply of random Star Trek crap to be
incorporated.
------
smakz
I also don't see what the big deal is. Having a Wikipedia article does not all
of a sudden validate the Y Combinator idea, and having it deleted certainly
does not invalidate the work they've done.
To put it in perspective, ignition partners, one of the largest north western
venture capital funds, does not have a wikipedia page.
Take a look at the articles on VC firms on sand hill:
<http://en.wikipedia.org/wiki/Sand_Hill_Road>
Only KPCB has an informative, encyclopedic entry - the rest I would argue
don't even need to have articles.
Not to mention YC isn't a big VC firm, it's seed-only.
------
tptacek
This again?
Read the comments here: <http://news.ycombinator.com/item?id=216723>.
YC has a huge amount of media coverage. There is no way it is going to be
deleted. The sole standard for an article remaining in Wikipedia is
Notability, which is determined entirely by the presence of reliable
independent sources.
Anybody can nominate an article for deletion at any time. You could nominate
[[Bill Gates]] right now. It would appear, briefly, in the AfD debate log,
until someone speedy-kept it. This will get speedied too. Move along, nothing
to see here.
------
biohacker42
A Wikipedia competitor is one of the things pg would love to fund so....
------
ph0rque
why are there only 16 yc companies listed?
~~~
pg
For a while Wikipedia had such a complete list that we ourselves used to refer
to it. Then some wikipedian decided to "improve" the list by deleting most of
them. Since then it has always been a more or less random subset of YC funded
cos.
~~~
hhm
I guess there should be a separate page listing all YC funded companies (if
you can have pages listing all characters in tv series, why not companies
funded by YC?).
~~~
Goladus
_why not companies funded by YC?_
Because the standard of 'notability' is bogus when applied to something of the
supposed scope of wikipedia. It's impossible to apply it with any reasonable
consistency, and it always boils down to a few biased opinions, which is why
the topic of notability deletions is so sensitive.
------
hooande
Does traffic play into this at all? Something tells me more people go to that
page than to the long tail majority of wikipedia pages.
------
known
Y Combinator Alexa Rank is 205,428
[http://www.alexa.com/data/details/traffic_details?url=http%3...](http://www.alexa.com/data/details/traffic_details?url=http%3A%2F%2Fnews.ycombinator.com%2Fitem%3Fid%3D288200)
~~~
Jem
You are aware that Alexa Rank is both a) virtually useless and b) easily
fixed, yes?
~~~
known
YC ranks 72,800 in <http://www.sitereportcard.com/index.php>
------
ckinnan
It's the Internet! Why delete any articles, ever!? It's not like Wikipedia is
running out of database space or something. It's dumb to have a subjective
"notability" standard at all in a world of practically infinite scale.
~~~
gnaritas
It's called signal to noise. All content is not good content and serves to
make the good content hard to find. Deletion is necessary.
------
ciscoriordan
It's things like this that make me lose faith in Wikipedia.
~~~
tstegart
I don't know. When you read the definition of what is notable, you can argue
someone has a point. Useful? yes, Interesting? of course, Popular? definitely.
But you have to dig deeper to decide if YC is notable.
After some thought, I think it is. One can say YC is just another VC firm, so
why should that be notable. But the unique way in which they are investing and
developing companies is notable.
~~~
parenthesis
The fact that several imitators of YC have since popped-up seems to me to make
the original notable.
------
Mystalic
Well that was a rather quick resolution to the problem.
~~~
ciscoriordan
The one-time problem of this specific article being nominated for deletion has
been resolved, but the larger issue of the Wikipedia bureaucracy preventing
good content from being created and viewed is certainly still around.
------
psyklic
... and the article is still pretty bad! apparently these comment threads
don't inspire much action ;-)
------
mroman
One answer to their knack for deleting valid content is to simply NEVER donate
to them, encourage every single person you know to do the same, and let
wikipedia know you are doing this and why.
This is what I have done.
Ask HN: Is there a news site where you can define sources it must pull from? - thasaleni
I don't need something like Google news, that suggests news based on some algorithm, I need something i can manually add sources to and then it consolidates and classifies all the news from those sources. e.g. I add Bloomberg, Reuters, CNN, and it can pull news from all those sources and show them in one website with different classifications like Finance, Politics, Technology etc
======
thasaleni
The reason I asked for this is that I have a couple of websites I read at
least every day to keep up with current news in different categories; combined
they are all north of 15, and I sometimes even forget which is which. I keep
tabs open at work currently to keep up with them. I would like to have one
website that has nothing by default, where I add URLs for my news sources and
it shows me news from those sources all day. Some of these sources are just
websites with no RSS feeds :( so something that works magically would be good,
with RSS being the fallback if that can't happen.
------
nefitty
What about an RSS reader like [http://feedly.com](http://feedly.com)?
~~~
thasaleni
OK, thank you I looked at this and this is exactly what I was looking for, and
more
------
iterrogo
I run [https://kabonky.com](https://kabonky.com) which does exactly that. It's
more of a side project but if you find it useful and want a source it doesn't
have just let me know and it's likely I'll be able to add it.
~~~
thasaleni
This is close to what I want, it would be nice if I could add any source, by
URL, even if it means I have to go to the origin site and look for the RSS
endpoint myself
------
frou_dh
Dave Winer's "River of News" concept/softwares are probably a good starting
point
------
runjake
This is what an RSS reader does.
------
applecrazy
What about Apple News? It can do all this with the iOS 19 update.
~~~
thasaleni
I need something on the web. Also, I don't own any kind of iOS device.
Creating consistent development environments with Docker - sdomino
https://hackernoon.com/how-to-create-consistent-development-environments-that-just-work-55be5417341b
======
seoknucklehead
What percentage of time do developers spend troubleshooting code on different
environments and trying to synchronize, probably 25-30% anyhow?
~~~
technologyvault
Sometimes more than that. Depends upon the size of the team and how well
organized they are.
Show HN: Vuesence Book – Vue.js component for building documentation systems - _altrus
https://github.com/altrusl/vuesence-book
======
mahesh_rm
This looks neat! Will try it out soon.
I Want a New Platform - dstowell
http://www.unionsquareventures.com/2007/09/i_want_a_new_pl.html
======
dpapathanasiou
Ugh... I want to downvote this just for putting that Huey Lewis song in my
head
Reverse-engineering Instagram to access the private API - 1il7890
http://definedcodehosting.com/reverse.html
======
sjtgraham
Is it still the case that the app only uses HTTPS to create a session and
plain HTTP for everything else? I remember that was the case about a year ago
after using mitmproxy to sniff traffic, although I don't recall HMAC being
used to sign requests then. Anyway, I wondered then why nobody had used
firesheep to devastating effect, e.g. a bot sitting on an open wifi and
posting NSFW images to any account on the network.
~~~
xuki
I'm fairly certain they use HTTPS for all the requests that involve a user token.
------
potomak
Note that Instagram doesn't encrypt requests to their private API, they're
only signing them. In fact the parameter is called 'signed_body', not
'encrypted_body'.
Anyway interesting post.
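For readers who want to see what that kind of signing looks like, here is a
rough Java sketch of producing an HMAC-SHA256 signature over a request body.
The key and the wire format shown are invented for illustration, not
Instagram's actual values:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;

    class SignedBody {
        // Hypothetical secret; a real client ships with its own embedded key.
        static final String KEY = "example-client-secret";

        static String sign(String body) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(KEY.getBytes(StandardCharsets.UTF_8),
                    "HmacSHA256"));
            byte[] digest = mac.doFinal(body.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b & 0xff));
            }
            // e.g. sent alongside the body as a signed_body-style parameter
            return hex.toString();
        }
    }

Anyone holding the same key (extracted from the app, as the post describes) can
produce identical signatures, which is why signing alone doesn't keep the API
private.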
~~~
CGamesPlay
The traffic is encrypted over SSL (HTTPS), so there's no need for double-
encryption, since presumably the app can trust the DNS and certificate chains
on the device. If I were a paranoid Instagram developer, though, I could even
use a custom certificate chain that only trusted certificates on my server, so
there's no point to doing the encryption in the app.
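As a rough illustration of that idea, certificate pinning in a client is only a
few lines; this sketch uses OkHttp with a placeholder hostname and pin, so
every value here is made up rather than anything Instagram actually ships:

    import okhttp3.CertificatePinner;
    import okhttp3.OkHttpClient;

    class PinnedClient {
        static OkHttpClient build() {
            // Placeholder host and pin; a real app would pin the hash of its
            // own server certificate or intermediate CA.
            CertificatePinner pinner = new CertificatePinner.Builder()
                    .add("api.example.com",
                         "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                    .build();
            return new OkHttpClient.Builder()
                    .certificatePinner(pinner)
                    .build();
        }
    }

With a pin in place, a proxy presenting its own certificate is rejected even if
its root CA has been installed on the device, though a determined user can
still patch the app itself.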
------
rnaud
Isn't the fact that they are using a simple HMAC-SHA256 hash also a root of
the problem?
If instead of using only the POST data to create the hash they added other
information, like the hour of the day, wouldn't it be way harder for a
hacker to actually understand what went into signing the request?
~~~
meritt
Not really. He decompiled the code so it's pretty simple to figure out
regardless.
------
SifJar
Interesting write up. Seems rather simple, really. Presumably Instagram could
change their private key and rollout a new client version on each platform,
breaking all third party apps using the current key though. Although I guess
it'd be just as easy to get it again.
~~~
Zariel
Well if you have the private key on the device you can always pull it out, so
really they should have used public key encryption if they didn't want people
to forge the signatures.
~~~
CGamesPlay
Public key encryption to sign involves encrypting using my own private key.
Which would be on the phone, and so any end user could pull it from the
device.
~~~
cake
So what can you do? You can't fight the private key retrieval, right?
~~~
CGamesPlay
Well, you can lock down the device so that the end user doesn't own it,
doesn't have root, and can't inspect your binary/data outside of your exposed
interface. Which is exactly what iPhones and many Android vendors attempt to
do.
~~~
somesay
Typical DRM/crypto problem. As soon as one party is out of your control (e.g.
the client on an end-user device) you have already lost. Even on the iPhone you
could do a jailbreak. You can only make things more difficult; in the worst case
you would have to open the device and directly access the RAM or something.
Indeed you could personalize the keys for every user. Then you could detect a
leaked private key that is widely used and proceed against it. Still, that
wouldn't hinder further personal use of that private key, e.g. for exporting
data (similar to breaking DRM).
Torrent downloader Bitport.io premium for free - MicarleNostril
https://bitport.io/holiday-giveaway/
======
MicarleNostril
Follow the link. After you register, you get your premium plan automatically.
It is valid until the end of this year. Enjoy.
How Genius annotations undermined web security - LukeB_UK
http://www.theverge.com/2016/5/25/11505454/news-genius-annotate-the-web-content-security-policy-vulnerability
======
kaonashi
Isn't this essentially the same thing as hoodwink.d?
Berkshire Hathaway Shareholder Letters - merrick33
http://www.berkshirehathaway.com/letters/letters.html
======
peternicholls
Love these letters!!
Priceless to anyone who is running a proper business!
12 Outdated Web Features That Need to Disappear in 2014 - octavianc
http://mashable.com/2014/01/14/outdated-web-features/#:eyJzIjoiZiIsImkiOiJfdnJxbWVjbTFmeTg0bmV6ayJ9
======
wanda
#13: articles about _n_ things that _x_
WPCouple Interviewed the React.js Team at Facebook About WordPress and Gutenberg - agbonghama
https://wpcouple.com/interview-react-team-facebook-wordpress-gutenberg/
======
mrahmadawais
Thanks for the submission.
Tumblr will be joining Yahoo - twapi
http://staff.tumblr.com/post/50902268806/news
======
benackles
How many different ways have we heard the same thing?
> So what’s new? Simply, Tumblr gets better faster. The work ahead of us
> remains the same – and we still have a long way to go! – but with more
> resources to draw from.
Wow, someday someone's going to get caught plagiarizing. An acquisition rarely
leads to "more resources to draw from" and more often leads to neglect. Yahoo
clearly has a lot riding on this deal and will surely make a strong effort to
disprove the doubters on this one. However, unless they're banking the company
on Tumblr (doubtful), then it's hard to imagine more resources going into
Tumblr.
The only time a large company's resources bring value is when the acquiring
company has legal and lobbying resources a small upstart could never afford.
YouTube and PayPal are probably the best examples of this.
~~~
podperson
YouTube probably benefited from the deep pockets of Google since it had no
monetization strategy and used crazy amounts of bandwidth and storage
(especially for the time).
~~~
benackles
Bandwidth and storage costs were surely a growth pain. However, the thing that
was inevitably going to sink them was the $1 billion lawsuit they were facing
from Viacom.
------
tehwebguy
Signed off in true tumblr form:
Fuck yeah,
David
~~~
supercoder
Yeah maybe I'm just cynical.. but something about that just seems a little
forced, like 'look we're definitely still cool, see i swore!'.
Also comes off like he's slightly resisting the merge with Yahoo, which seems
a little strange given they've just dropped 1 billion on them.
~~~
tehwebguy
Totally possible. I was referring to how fuckyeahX.tumblr.com is a popular
formula for Tumblr blogs that focus on X.
------
artursapek
Why is it so common to refer to Yahoo! in the first person as "Marissa?"
I know she's the CEO, but I never hear anyone call Microsoft "Steve," Google
"Sergey and Larry," AirBnB "Joe," Apple "Tim," etc. Why is Marissa such a big
deal?
~~~
mark_l_watson
I think that her personal brand is a plus for Yahoo, so I doubt that she or
Yahoo! minds this.
------
stevewilhelm
> We’re not turning purple. Our headquarters isn’t moving. Our team isn’t
> changing. Our roadmap isn’t changing.
Spending 1.1 billion on a company that says it's "not turning purple" may have
some impact on morale in Sunnyvale and on Wall St.
------
laterzgatorz
Could someone explain something to me? When yahoo pays the 1.1B, who does that
money go to? Does it go to the original investors apportioned out based on how
much equity they have?
~~~
pbiggar
It goes to the stock holders. For a company at Tumblr's stage, probably about
55% of stock will be owned by VCs/angels/other investors, 20% by employees,
and maybe 25% by founders.
It will also include signing bonuses and golden handcuffs for employees,
though that will be tiny relative to 1.1B, so it probably doesn't change the
numbers much.
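Taking those rough percentages purely as assumptions, a $1.1B all-cash price
would split to very roughly $605M for investors (55%), $220M for employees
(20%), and $275M for founders (25%), before any liquidation preferences or
retention packages shift the real numbers.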
------
yvoschaap2
obviously worth a 'congrats' for a 7-year-old company which built a pretty
great community...
Does an "all cash" deal imply something about Yahoo stock valued as bearish?
~~~
loceng
2 views:

\- Yahoo! values their stock more than their cash

\- Tumblr and/or BOD doesn't value Yahoo! stock as much as cash
Ask HN: Would you use an email presorting service like this? - branko
Hi guys, at SquareOne we've been building a more manageable mobile email experience, and through all of our customer research we came up with an interesting service model that could live on top of the platform we've built.

This is a service for people who get 100+ emails a day and can't effectively stay on top of their inbox without spending hours a day inside it. We let the person with the highest incentive to have an email reach you - the sender - do a little bit of extra work and categorize their email for you. Concept landing page:

www.squareonemail.com/mailone

What do you think?
======
stevekemp
Years ago I sent a message to a stranger. I received an auto-reply saying "I
got your mail, spam is hard, click this link to deliver it and prove you're
human".
For a second I thought it was a cute solution to spam, then I realized we'd be
in a world of pain if we mailed each other, and both of us replied in such a
fashion.
I never mailed him again.
If I have to do work to contact somebody? Well I'll just ignore that person.
Sorry.
~~~
whichdan
[http://en.wikipedia.org/wiki/Challenge%E2%80%93response_spam...](http://en.wikipedia.org/wiki/Challenge%E2%80%93response_spam_filtering)
Further reading if anyone is interested in challenge-response systems.
------
whichdan
So is SquareOne supposed to replace my email, or work alongside it?
~~~
branko
SquareOne (squareonemail.com) is a standalone native iOS email client that
works with your Gmail account. We're in public beta with the app now.
The experiment we posted about (we're calling it MailOne for now) would simply
supplement the new email UX by an added service layer, where the sender of an
email is challenged to categorize it for you as the recipient.
Ask HN: What hipster skills or project are you working on due to plague lockdown - awaythrower
I'm making spanish rice, falafel, and hummus from absolute scratch.

Is someone out there growing wheat and making naan or tortillas?
======
awaythrower
I just learned cooking white and brown rice together doesn't work because the
white rice will keep absorbing liquid until it's saturated, leaving
hard/uncooked brown rice. Whoops. Haha.
~~~
Minor49er
You might have better luck with brown and wild rice
Bike light that projects the symbol of a bike down onto the road - dabeeeenster
http://www.kickstarter.com/projects/embrooke/blaze-bike-light
======
jdietrich
This is stupid.
The £50 that this light costs will buy you an _extremely_ bright set of
conventional lights, giving out several hundred lumens rather than 80. These
lights will illuminate a large area of the ground ahead of you, allowing you
to see, and provide dazzlingly bright points of light several feet off the
ground, allowing you to be seen. These lights focus their beam where it is
needed, projecting light directly towards the eyes of motorists, rather than
bouncing it off the ground for no clear reason. Spend a bit more and you can
buy a Magicshine set, which is on a par with motorcycle lighting.
Drivers are not looking at the ground. They're not looking out for weird green
symbols. They're looking for red or white points of light, a few feet off the
ground. Cars and trucks have large blind spots on the nearside of their
vehicle, which a cyclist can best avoid by using their height - a cyclist is
as tall as a large SUV, providing good opportunities to mount your lights
above the doorline of most cars.
If you're worried about drivers turning across your path, your road position
is wrong. You can buy some silly gadget that won't really help the situation,
or you can get some training to give you the confidence to get out of the
gutter.
~~~
jawr
I very much agree with your last statement; if you are undertaking a bus or
other large vehicle you probably shouldn't be cycling at all. Of course there
are exceptions to this.
Although lighting is a very important factor of being safe when cycling on the
road, being aware of traffic and the law of the road is far more beneficial. I
have cycled around London for years, usually with no lights at all and perhaps
it is luck, but I have never had an accident.
You need a strong position in the road and decisive action that allows other
road users to see your intentions - just act as if you were a car.
~~~
dabeeeenster
If you are cycling without lights, I would say you shouldn't be cycling at
all.
~~~
jawr
This is true especially in the autumn/winter. Unfortunately my self-confidence
on the road usually gets the best of me.
As much as I hate to say it, maybe there should be some sort of
guidelines/laws to cycling on the road; I know there are plenty of drivers
that would want this and I have seen some awful cyclists on the road who are
not only a danger to themselves, but also to others.
~~~
mootothemax
_maybe there should be some sort of guidelines/laws to cycling on the road_
Such as the highway code?
[https://www.gov.uk/rules-for-
cyclists-59-to-82/overview-59-t...](https://www.gov.uk/rules-for-
cyclists-59-to-82/overview-59-to-71)
------
AdamGorman
Already launched a similar product (this one goes backwards, instead of forward
though) <http://www.thisiswhyimbroke.com/bike-lane-light>
Combine for red/green Christmas holiday theme goodness?
~~~
debacle
The product you linked looks much more functional than the one in this
kickstarter. Do you own this?
~~~
ljf
I have one, and it's NOTHING like the image - if it hadn't been a gift I
would have returned it.
I'll give you that it looks cool and other cyclists see it and like it, but as
soon as a car with bright headlights comes anywhere near you it disappears...
It has no practical use in helping keep cars at bay.
I'd also agree with some comments above that drivers are looking a few feet
up, not at the road.
I find 3 LED lights the best for the rear - one at the bottom of the seatpost,
one on my back and one on my helmet - they can easily be seen from all angles,
whereas a single seat-mounted light can be hidden for a range of reasons.
From the front a Cree T6 from ebay is stupidly bright - keep it aimed at the
road. Great for off roading and city riding.
------
dlib
I live in Holland and the dedicated bike paths here make cycling really safe.
So safe that I don't see the use for this bike light here in the Netherlands
although it will probably have a marginal benefit. However, last summer I
rented a bike in London and it was truly terrifying. Never before have I felt
so unsafe in traffic; anything to improve this situation would be
great. Nonetheless, proper training of drivers so they are more aware of
cyclists and dedicated bike paths would help immensely.
~~~
alexkus
In London we simply don't have space to put in dedicated cycle paths
everywhere. IMHO, they're treating the symptom anyway, the problem is that a
large number of road users in the UK (including quite a few cyclists) are
aggressive, discourteous or just plain bad drivers/riders.
Cycle facilities (cycle lanes - dedicated or not, off road cycle paths,
advanced stop line boxes, etc) all help encourage people to cycle; this is
great as the more people on bikes on the road the safer it is for everyone
(see TFL's "Safety In Numbers" campaigns). But a 2mm high strip of white paint
on the road isn't going to save me from an idiot that is texting whilst
driving and not looking where they're going. Ideally we wouldn't need any on-
road cycling provisions at all.
I've ridden a bike quite a bit in France and have had no problem with the car
drivers there, they give plenty of space when overtaking and have no problems
being temporarily held up behind a cyclist until it is safe to overtake. No
special facilities, just courteous drivers.
Segregation works both ways. This is anecdotal, but on one long distance ride
in the UK there were sections where the riders were overtaken by the support
vehicles of other riders (mainly the european riders). By far the worst (in
terms of least distance given when passing) for overtaking were the cars with
"NL" markings on their european number plates. My guess is that the years of
segregation have put them out of practice with sharing the roads with cycles.
Again, that's just my experience.
FWIW I cycle commute almost daily in London (I took the train today as I've
got a horrible cold) and have grown used to London traffic. The closest times
I've come to an accident were all my own fault. I've read "Roadcraft", I've
had a full driving license (both car and motorcycle) for 15+ years, I did do
cycling proficiency at school, yet I'll still take the opportunity for more
training if it comes up (the local London boroughs to me often provide free
cycle training every so often). I don't undertake vehicles approaching left
turns, or overtake vehicles approaching right turns. I avoid cycling in the
'door zone' and I'm especially careful passing stopped cabs as the doors are
invariably opened by passengers without looking. I'll be assertive but not
aggressive. I smile at the lemming pedestrians that walk a step or two out
into the road before looking. etc.
~~~
gbog
> In London we simply don't have space
That's what they all say. I am sure some said the same in Paris before the
recently built cycle lanes. Cars and other stuff can give more space to other
means of transportation. If needed, just crush some old constructions, it will
give work to those who need it and it will give an axe to grind to those who
need it.
~~~
carlob
Though Paris has surprisingly large avenues for a European city because of
Haussman's renovations.
------
PeterisP
I'm not entirely sure how that would help - it projects a bike symbol down to
the ground.
When I'm driving, if such a cyclist would be in my blind spot (behind me, in a
lane to my right), I wouldn't see the projected symbol as well. It would be
simply obscured by my car - I have no way of seeing the asphalt so close to
me; if a pedestrian would be standing on that illuminated spot, I'd see the
person, but not their feet.
That's assuming the suggested 4-6m distance. 10-15m would be different, but
that makes much higher requirements on the power of that light.
------
RobAley
It's a shame, for a UK product, they didn't consider that it will be illegal to
use in the UK[1] as your main bike light.
It has a steady mode, and such needs to conform to BS 6102/3, which it won't,
so it means it can't be used as the main light. As an additional light it
doesn't need to be BS 6102/3 stamped, but does need to be white (not green)
and the flashing mode needs to be between 60 and 240 flashes per minute (which
it may do, but doesn't say).
When I'm driving, a green picture of a bike going down a road might distract
me from the actual bike, especially when it's projected 4-6m ahead of it as
they plan. In inner-city cycling as a cyclist, you rarely need the light to
see where you're going, more to allow other traffic to see you.
[1]
[http://en.wikipedia.org/wiki/Bicycle_lighting#Legal_requirem...](http://en.wikipedia.org/wiki/Bicycle_lighting#Legal_requirements)
~~~
TeMPOraL
> and the flashing mode needs to be between 60 and 240 flashes per minute
> (which it may do, but doesn't say).
There's a thing I never understood about bike lights - why people use flashing
lights? They are distracting like hell, and could probably cause some serious
discomfort for people with photosensitive epilepsy.
EDIT:
[http://speakingupanyway.wordpress.com/2012/01/07/flashing-
li...](http://speakingupanyway.wordpress.com/2012/01/07/flashing-lights-
trigger-seizures-so-dont-use-them/)
~~~
jdietrich
Flashing lights are much more visible, for a variety of reasons. The most
obvious is that you can drive the LED much harder and run the light for longer
if it's flashing. Peripheral vision is much more sensitive to flashing lights
than solid lights, because we're evolved to be more sensitive to fast-moving
objects.
Movement is another key reason for flashing lights - motorists have a great
deal of difficulty in accurately judging the speed of cyclists, who are
travelling more slowly than them and at a much broader range of speeds.
Flashing lights make it much easier to judge speed and distance, which is why
the FIA require flashing rear lights to be used on the rear of Formula 1
racing cars in wet weather conditions.
As another commenter stated, bicycle lights flash at the wrong frequency to
trigger photosensitive epilepsy.
------
drunken_thor
I like this idea better and I should be getting mine in the mail in the next
month or so [http://www.kickstarter.com/projects/1652790707/torch-
bicycle...](http://www.kickstarter.com/projects/1652790707/torch-bicycle-
helmet-with-integrated-lights?ref=live)
------
chopsueyar
This is not original. This is old...
[http://tech.slashdot.org/story/09/07/01/2255234/bike-
project...](http://tech.slashdot.org/story/09/07/01/2255234/bike-projector-
makes-lane-for-rider)
From 2009:
[http://www.artic.edu/aic/collections/exhibitions/Hyperlinks/...](http://www.artic.edu/aic/collections/exhibitions/Hyperlinks/GantTee)
------
nrcha
Well.. It is a requirement in Germany that all bikes 11 kg or over, are fitted
with dynamo powered lights.
~~~
ygra
This says nothing about any auxiliary lights you choose to mount, though.
Still, I guess this thing would be illegal in Germany simply due to the fact
that lighting on a vehicle is _very_ rigorously specified and I guess the
green colour won't fit well with the laws here.
I guess I'll opt for a very bright (hub) dynamo-powered light instead, but my
main concern currently is not visibility but that I can still see the ground
when there's a car coming at me with blinding lights.
(Side note, anecdotal evidence: No cyclist I know ever had problems with the
police for using battery-powered lights instead of dynamo-powered ones. Most
of the time they're happy when cyclists have light _at all_.)
------
mcpie
Imagine this in a country with a lot of bike traffic... 10 ~ 20 flashy green
bikes on the road would decrease safety, not increase it.
|
{
"pile_set_name": "HackerNews"
}
|
Ask HN: How to make an interactive story like NYTimes? - davidtranjs
Today my friend sent me a link to this post: https://www.nytimes.com/interactive/2020/03/22/world/coronavirus-spread.html. I am intrigued by the visual design and the interaction of this post. Is there any JavaScript framework that allows me to do that? A tutorial would be helpful too. Thanks
======
paulbishop
oh and would you like all this for free?
~~~
davidtranjs
I am looking for resources to learn how to implement it myself.
|
{
"pile_set_name": "HackerNews"
}
|
Are the news media misrepresenting data on how long coronavirus remains viable? - cvk
https://hackeur.life/coronavirus-viability/
======
firatcan
That was the question I was asking in this post just a few seconds
ago: [https://news.ycombinator.com/item?id=22697883](https://news.ycombinator.com/item?id=22697883)
Why are they doing this? Their approach makes no sense...
BTW, there are really great references in your writing. I'll add those to my
list, thanks :)
~~~
cvk
Not my writing, but thanks.
|
{
"pile_set_name": "HackerNews"
}
|
The Fed stalls the creation of a bank with a novel business model - known
https://www.economist.com/finance-and-economics/2018/09/22/the-fed-stalls-the-creation-of-a-bank-with-a-novel-business-model
======
dannyw
In a competitive market, if the Fed is paying out 1.95% to bank reserves,
there is no reason why my savings account shouldn't pay me that figure minus a
spread for expenses and a profit spread.
~~~
pankajdoharey
Well, nothing stops you from investing abroad. For instance, investing in an
Indian bank as a depositor will get you a guaranteed 6-7% return per annum,
depending on the bank. Long-term deposits have even higher return rates; some
smaller private banks even give as high as 8-9% on deposits.
~~~
deepGem
Holds true only if you convert your currency to INR. The dollar denominated
deposits (FCNR) have far lower yield around 3.4% still better than what the US
banks offer.
[https://www.icicibank.com/nri-
banking/RHStemp/rates.page](https://www.icicibank.com/nri-
banking/RHStemp/rates.page)
But I think you need to hold an Indian passport to open these deposits. I am
not sure though.
~~~
pankajdoharey
Actually even I am not sure about that. There should be some route though;
Foreign Institutional Investors do invest, so there should be some legal route
to circumvent this.
------
User23
There is the usual misinformation in this article. Banks don’t loan out
deposits. Originating loans expands their balance sheet. Deposits are useful
to satisfy reserve requirements, when they exist[1], but if the loan is
profitable the bank can always borrow the reserves at a lower rate on the
interbank market.
[1][https://www.federalreserve.gov/monetarypolicy/reservereq.htm](https://www.federalreserve.gov/monetarypolicy/reservereq.htm)
~~~
UncleEntity
Banks have always loaned out deposits, that's just what they do elsewise
they'd be money warehousing operations and there'd be no fractional-reserve
banking cartel.
~~~
klodolph
I think there's a bit of a subtlety in the parent comment that you may be
missing, because the "banks don't loan out deposits" is a reference to
fractional-reserve banking.
~~~
UncleEntity
That's the whole basis of fractional-reserve banking, banks loan out deposits
while keeping a small percentage on hand (last I cared to check it was 10%) to
meet their reserve requirements.
Without deposits banks couldn't loan out anything since they couldn't meet
their reserve requirements and they'd basically be insolvent.
So, true, banks don't loan out 100% of deposits but they certainly do loan out
deposits.
~~~
didgeoridoo
Edit: my comment was misleading and confusing and I’m too tired to fix it
right now so into the bin it goes :)
~~~
klodolph
No, that's not correct at all. It does mean that the bank loans out an amount
of money equal to 90% of its deposits. In some sense it's not the "same" money
because the deposit amount is counted as money in its own right... both the
amount of money in the loan and the amount of money in the deposit is counted
as "money supply".
The 1000% figure comes from the fact that the money that the bank loans out is
in turn held in another deposit account somewhere, and so it has a cascading
effect... I deposit $100, my bank loans out $90 to person B, that money gets
saved in a bank which can then loan out $81, then $72.9 gets created somewhere
else. 1000% is just the limit of the sum 1 + 0.9 + 0.81 + ...
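To make that limit explicit: with a reserve ratio r (10% in this example, as
used elsewhere in this thread), the cascade is a geometric series, and its
closed form gives the theoretical cap

    \sum_{k=0}^{\infty} (1 - r)^k = \frac{1}{1 - (1 - r)} = \frac{1}{r} = 10
    \quad \text{for } r = 0.1

which is exactly where the 1000% figure comes from.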
~~~
Dylan16807
And to put that all together, it means the bank is responsible for $1000 in
deposits, has $900 owed to it in loans, and has $100 of cash in the vault.
------
stephen_g
This article has a surprising misunderstanding of the way banks work for a
publication called “The Economist”. It is unintuitive, but banks _do not_ and
_can not_ lend out deposits. Banks lever their _capital_ (that is, paid up
shares and retained profits) to lend, and that lending creates new deposits.
Existing deposits and debt funding (money markets etc.) are useful as
liquidity so they can settle interbank transfers, but they aren’t allowed to
lend out of them, both because of regulations like the Basel framework and
because it would violate accountancy rules (deposits are liabilities, and you
can't back a liability with a liability). In reality they create the loan
(asset) with a corresponding liability (the deposit).
The Bank of England (basically the UK's equivalent of the Fed) has published a good
article on this - [https://www.bankofengland.co.uk/quarterly-
bulletin/2014/q1/m...](https://www.bankofengland.co.uk/quarterly-
bulletin/2014/q1/money-creation-in-the-modern-economy)
~~~
zeroxfe
> In reality they create the loan (asset) with a corresponding liability (the
> deposit).
Isn't that just a book-keeping technicality? Fundamentally, the bank is still
using deposits to leverage a loan. Or am I misunderstanding your point?
~~~
empath75
So basically I deposit $1000.
Someone else goes to the bank and says ‘I want to borrow $1000’
Great the bank says, approved. And they set up a bank account for you that
says you have $1000 in it.
Now they’ve got $1000 in reserves and 2 accounts that claim to have $1000 in
it.
Now when you withdraw $500 where does that money come from? I can assure you
it’s not from the banker’s wallet. That’s coming out of my deposit. Not from
my account because my account still says they have $1000 on hold for me, it
comes from the actual cash I deposited. (Which of course isn’t actually cash
in most cases)
None of that matters in practice because unless everyone tries to withdraw all
the money at once there’s plenty to cover all the withdrawals, but they are
definitely using money from deposits to cover withdrawals.
~~~
stephen_g
I think part of the problem in understanding is that you're assuming that
banks are places that store money. They aren’t really, they are places that
hold various types of assets and manage IOUs between themselves and other
parties (other Banks, the Government/Fed, and customers etc.) in either
direction.
Banks don’t really store money, you give them money and they give you bank
credit, and then later on you exchange that credit back for money or transfer
credit to someone at another bank and the two banks settle that with some
asset (could be reserves etc.).
I think this is the main confusion; of course the physical cash the bank gives
you for a withdrawal might have been given to the bank during a deposit
transaction, but in terms of the transactions on the balance sheet it did not
come from there.
------
spectrum1234
I don't understand why banks would give this bank their money. Shouldn't they
be getting this same rate from the Fed? Something is left out.
~~~
dmurray
I think their target customers are not banks. The article says it would "take
deposits from financial institutions" but this could mean insurance companies,
pension funds, credit unions, securities exchanges, etc. Anyone who might have
substantial cash reserves as an integral part of their business, but who does
not actually have a banking license.
Edit: from the Matt Levine piece linked further down the comments, the target
customers are "money-market funds and foreign central banks".
------
captainmuon
Unless I'm missing something, why doesn't every bank have an "internal narrow
bank" that takes deposits and puts them at the central bank?
Actually, I thought that is exactly how banks have to operate right now... All
deposits are settled in the interbank system or ultimately with the central
bank. You can't just deposit money in a bank, because "depositing" $100 means
telling an upstream bank that your bank now owes you $100.
~~~
incompatible
They can do, quite a lot of money is on deposit at the Fed. But they may find
it more profitable to lend the money elsewhere. All true government issued
money is either in the form of banknotes and coins or deposits held at the
central bank.
~~~
pas
What is gov issued money? The money created by the fed is gov issued too, no?
And it sits on the Fed's balance sheet as liability, debt, but not as deposit.
~~~
incompatible
For simplicity I'm taking the Fed as part of the government. I know the setup
in the USA is a bit strange. A government can create as much money as it
likes, debt on a balance sheet is an optional accounting detail.
~~~
thelasthuman
So the federal reserve isn't part of the government?
------
apo
_... The central bank may worry that narrow banks, which lend to neither
companies nor individuals, could hamper the effectiveness of monetary policy.
Their business model may also risk unsettling incumbent banks, which could
have large economic consequences. ..._
This is almost certainly a big reason for the opposition. The Federal reserve
wants to keep people from "hoarding" (saving) money at all costs. Driving down
interest rates and causing inflation is one tactic. Preventing the emergence
of banks who can out-compete the rest of the market on interest rates is
another.
------
cryptonector
> The Narrow Bank would take deposits but not make loans
That can't work, not if you scale it up across the industry. The Fed could not
continue to pay interest even on overnight deposits without it directly
turning into inflation. The economy could not function without loans.
Alternatively the Fed would have to become the one (and only) lender for
commercial (and muni, and...) purposes so that it could make the interest
income on those with which to service deposits from these "narrow banks". A
narrow bank is essentially free-loading, and politically untenable.
~~~
konschubert
Matt Levine writes about this in his newsletter:
[https://www.google.de/amp/s/www.bloomberg.com/amp/view/artic...](https://www.google.de/amp/s/www.bloomberg.com/amp/view/articles/2018-09-06/fed-
rejects-bank-for-being-too-safe)
BTW, it's a really great newsletter, always entertaining and has been teaching
me a lot about the world of finance. His take on the Tesla scandal is
hilarious.
~~~
sah2ed
> _His take on the Tesla scandal is hilarious._
If you are curious as I was, here is the link:
[https://www.bloomberg.com/view/articles/2018-08-13/funding-f...](https://www.bloomberg.com/view/articles/2018-08-13/funding-
for-elon-musk-s-tesla-buyout-wasn-t-so-secure)
~~~
konschubert
That's not the one I meant, he's also talking about Tesla in his newsletter.
Probably still worth reading.
------
twic
Why is it so crucial to their business model that the customers' deposits are
deposited directly with the Fed, rather than lent out to other banks on the
overnight Fed Funds market? The Fed Funds rate is typically lower than the
IOER rate paid on deposits with the Fed, but only by a small amount:
_since early 2009 the fed funds rate has generally been 5 to 20 basis points
(one basis point is equal to 0.01 percentage points) lower than the IOER_ [1]
For example, money lent out on the Fed Funds market on thursday earned 1.92%
[2], while money deposited with the Fed earned 1.95% [3].
The Not-Quite-So-Narrow Bank wouldn't be able to pay as high a rate of
interest to depositors as the true Narrow Bank, but it also wouldn't be
dependent on the cooperation of the Fed, and Congress's authorization to the
Fed, in collecting the IOER rate.
While looking up those numbers, i came across a nice detailed series of posts
by George Selgin on IOER [4] and the Narrow Bank in particular [5] [6] - well
worth a read.
[1] [https://www.stlouisfed.org/publications/regional-
economist/a...](https://www.stlouisfed.org/publications/regional-
economist/april-2016/interest-rate-control-is-more-complicated-than-you-
thought)
[2]
[https://apps.newyorkfed.org/markets/autorates/fed%20funds](https://apps.newyorkfed.org/markets/autorates/fed%20funds)
[3]
[https://www.federalreserve.gov/monetarypolicy/reqresbalances...](https://www.federalreserve.gov/monetarypolicy/reqresbalances.htm)
[4] [https://www.alt-m.org/2017/06/01/ioer-and-banks-demand-
for-r...](https://www.alt-m.org/2017/06/01/ioer-and-banks-demand-for-reserves-
yet-again/)
[5] [https://www.alt-m.org/2018/09/10/the-skinny-on-the-narrow-
ba...](https://www.alt-m.org/2018/09/10/the-skinny-on-the-narrow-bank/)
[6] [https://www.alt-m.org/2018/09/14/the-narrow-bank-a-follow-
up...](https://www.alt-m.org/2018/09/14/the-narrow-bank-a-follow-up/)
~~~
srgseg
Because then you have counterparty risk, which means you then have greater
regulatory/compliance costs and have to pay up to 0.4% on assets for FDIC
insurance.
~~~
twic
Counterparty risk on Fed Funds loans, which are for one day only and to solid,
highly regulated financial institutions, is negligible - that's why it's
treated as the risk-free rate.
Would you need FDIC insurance if you only took deposits from financial
institutions?
~~~
srgseg
> Why is it so crucial to their business model that the customers' deposits
> are deposited directly with the Fed
Note that whether interest was earned via IOER or via the Fed Funds market,
the funds would be deposited with the Fed either way (whether in the bank's
own Fed account or overnight in the Fed account of the counterparty of a Fed
funds loan).
It wouldn't be possible to accept transfers from other banks, to send funds to
other banks or to participate in the Fed Funds market unless the bank had an
account with the Fed.
On the point of "risk-free", the TED spread was north of 1% for much of the 20
months between August 2007 and April 2009, and peaked at over 4%. So it's not
always what I'd call negligible risk.
[https://fred.stlouisfed.org/series/TEDRATE](https://fred.stlouisfed.org/series/TEDRATE)
------
ttul
The Fed is probably concerned that, if the idea catches on, trillions in
deposits could be vacuumed out of the fractional reserve banking system,
leaving less available for lending.
If that were to happen, the Fed would lose considerable influence over
liquidity. Its only lever on narrow banks would be to adjust the deposit rate
encouraging savers to go elsewhere.
------
known
[https://archive.st/archive/2018/9/www.economist.com/w316/](https://archive.st/archive/2018/9/www.economist.com/w316/)
------
lesserknowndan
Debt is created, not money. You can’t pay the ferryman with an IOU from a bank.
~~~
goodcanadian
Actually, that is pretty much exactly how it works. Banknotes (i.e. cash), are
explicitly IOUs from a bank. Usually, it is a central bank, such as the Bank
of England, but sometimes, it could be another bank. There are three retail
banks in Scotland that are authorised to issue banknotes:
[https://en.m.wikipedia.org/wiki/Banknotes_of_Scotland](https://en.m.wikipedia.org/wiki/Banknotes_of_Scotland)
------
seibelj
I'm sure this is an interesting article, but not only is my "article limit"
reached, but I even can't login to read it.[0] Which is strange, given the
hundreds of dollars per year I give The Economist to support them.
If you want to stop people from stealing your articles, maybe give the paying
readers access would be a start.
[0] [https://imgur.com/a/2uo4hnE](https://imgur.com/a/2uo4hnE)
~~~
topmonk
I will set you free: [http://archive.is/JYM3P](http://archive.is/JYM3P)
~~~
seibelj
That's fine and all, but I am happy to pay for top-notch journalism. But even
me, the paying customer, is locked out.
------
comboy
A good starting point for those who want to learn something about the FED:
[https://www.youtube.com/watch?v=mQUhJTxK5mA](https://www.youtube.com/watch?v=mQUhJTxK5mA)
Just the basics, but I doubt these are obvious to everybody.
|
{
"pile_set_name": "HackerNews"
}
|
Webhooks, upload notifications and background image processing - nadavs
http://cloudinary.com/blog/webhooks_upload_notifications_and_background_image_processing
======
nadavs
This blog post details how you can use Cloudinary to perform asynchronous
background image processing in the cloud and receive web notifications when
upload and image manipulation are completed. Sample code in PHP, Python &
Django and Ruby on Rails is included.
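On the receiving end, the notification is just an HTTP POST with a JSON body,
so the handler can be a few lines in any web framework. A minimal sketch in
Python/Flask (the endpoint path and the payload field names here are
illustrative assumptions, not taken from the post - check the blog post for
the exact schema):

    # Minimal sketch of a webhook receiver for upload/processing notifications.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/cloudinary/notify", methods=["POST"])
    def cloudinary_notify():
        payload = request.get_json(force=True)
        # e.g. mark the image as ready in your own datastore
        public_id = payload.get("public_id")  # assumed field name
        print("background processing finished for", public_id)
        return "", 200

    if __name__ == "__main__":
        app.run(port=5000)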
|
{
"pile_set_name": "HackerNews"
}
|
IPhone 5 Prototype found at a bar - itzthatiz
http://www.csmonitor.com/Innovation/Horizons/2011/0901/Secret-iPhone-prototype-left-at-a-bar-again
======
grecy
Title is misleading... the device has not been confirmed found, only missing.
------
brk
tl;dr: it's possible that some piece of hardware roughly the size of an iPhone
might have been left at a local bar. Nobody has any concrete details of the
device beyond that.
|
{
"pile_set_name": "HackerNews"
}
|
Lost Ancestors of ASCII Art - fescue
http://www.theatlantic.com/technology/archive/2014/01/the-lost-ancestors-of-ascii-art/283445/
======
memracom
Look down at the picture of the Siamese cat. There is an example of Run
Length Encoding. I expect that some of the other source materials will also
show Run Length Encoding early enough that it would have invalidated a number
of patents as prior art. But unfortunately, at the time these patents were
being enforced, we didn't have such a rich Internet to use to find this info.
Hopefully people will keep up the task of digitizing the past so that these
ideas are not lost. The people of the past were more like us than we imagine.
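For anyone who hasn't run into it, run-length encoding is about the simplest
compression scheme there is: each run of identical symbols is replaced by the
symbol and a count. A minimal sketch (the pair-based output format is just one
arbitrary choice, not the notation used in the source materials):

    # Minimal run-length encode/decode sketch.
    def rle_encode(s):
        out = []
        i = 0
        while i < len(s):
            j = i
            while j < len(s) and s[j] == s[i]:
                j += 1
            out.append((s[i], j - i))  # (symbol, run length)
            i = j
        return out

    def rle_decode(pairs):
        return "".join(ch * n for ch, n in pairs)

    assert rle_decode(rle_encode("XXXX..XXX.")) == "XXXX..XXX."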
|
{
"pile_set_name": "HackerNews"
}
|
Is any one else extremely annoyed by Google's Android privacy bullying? - lumberjack
Every time I turn on GPS tracking, Google asks if I would like to let them have my location to enhance the location precision. I say no. Fine. I cannot disable the dialog for whatever reason. I then try to turn location tracking off from the desktop widget and it doesn't work for whatever reason. I have to get into the settings menu to do that.
I want to search something, and Google Keyboard keeps track of every word I type. Sometimes I press the wrong thing and suddenly my own smartphone is recording me and sending the data over to Google.
I take some photos and Google prompts me to save them to Drive. No, I don't want them uploaded to your servers. I have Google apps pre-installed and for some reason I cannot remove them. I disable them and Google Play updates them and re-enables them.
Are normal people really OK with all of this?
You don't have to be somebody important to find this extremely invasive. In two years I might be working in a lab developing solar panel tech. Do I really want to have this Android device trying to record my every movement?
It's Android 4.4.4 if anyone's wondering, with an almost stock OEM install.
I'm going to put CyanogenMod on it when I have time, but it really annoys me that the market is in such a state that most people don't give a crap about all this invasive tracking.
I mean, lawyers use this stuff. They are aware not to use Gmail when sensitive information needs to be communicated, but are they aware of just how much data their phone collects about them?
======
ionised
CyanogenMod is a lot better for the issues you have.
It has built-in Privacy Guard giving you granular control of all app
permissions.
It also comes completely free of Google Apps, which you can download a very
minimal version of if you need it. The version I use comes only with Play
Store and the Framework to make it work. I do use F-Droid for apps whenever
possible though.
I'll never install a stock Google or provider ROM ever again. It's such a
hassle to maintain control over what it's doing.
Honestly I'm kind of hoping Firefox OS makes good on its goals. I'll switch in
a heart beat if it does.
------
_RPM
Somewhat related, but I'm annoyed at Android overall. It seems I can choose a
default "protocol handler" for _every_ app that wants to open a specific type
of file. For example, in every new app that opens an HTTP link, Android asks
me which app I want to use, Firefox or Chrome? It then reminds me that I can
change the defaults in "some > place".
------
erasmuswill
Just use Google End To End for sensitive communications. As for the rest, I
love Google's ecosystem, although it is a huge privacy risk. It's convenient
to have everything linked up and "just work".
------
ocdtrekkie
Yup. People need to realize that Google Play Services is effectively malware.
It steals your data, invades your privacy, tracks your location, and shortens
your battery life.
|
{
"pile_set_name": "HackerNews"
}
|
A super-thin slice of wood can be used to turn saltwater drinkable - based2
https://www.newscientist.com/article/2212346-a-super-thin-slice-of-wood-can-be-used-to-turn-saltwater-drinkable/
======
abdullahkhalids
20 kg/m^2/h seems enough for a house near the sea to be self-sufficient. You
could pump salty ground water and filter this through such a setup. A person
uses about 350-450 liters of water every day. And 20 kg/m^2/h * 1 m^2 * 24 h =
480 liters. So one square meter is sufficient for a single person.
~~~
jaclaz
You are IMHO a tad bit optimistic, the process needs power, so - usually in
remote areas - it is solar powered, and that - unless you add the complication
of energy storage/batteries, limits the output to a much smaller number of
hours per day.
See (as an example):
[https://en.wikipedia.org/wiki/Membrane_distillation#Solar-
po...](https://en.wikipedia.org/wiki/Membrane_distillation#Solar-
powered_membrane_distillation)
On the other hand, 350-400 lt per person is on the very high side of water
consumption, even in a very "first world" scenario (such as the US); a more
realistic estimate is 150-200 lt per person per day. This is the UK:
[https://www.ccwater.org.uk/households/using-water-
wisely/ave...](https://www.ccwater.org.uk/households/using-water-
wisely/averagewateruse/)
Germany and Spain:
[https://water-for-africa.org/en/water-consumption.html](https://water-for-
africa.org/en/water-consumption.html)
[https://www.researchgate.net/figure/Water-consumption-
liters...](https://www.researchgate.net/figure/Water-consumption-liters-per-
person-per-day-reprinted-with-permission-from-3-Copyright_fig1_267434851)
With a minimum of attention/care 100 lt per day per person can be enough.
And a large part of this is of course not drinking water, so it needs less
desalinisation.
In a "built today" home, the water flushed from the basin/shower/bath (which
is relatively clean) is re-used to supply the toilet and - in some cases - the
washing machine.
Newish toilets typically have a double flush at 6 and 9 liters, so if you
re-use only those the saving is likely to be around 30 liters per day per
person.
~~~
abdullahkhalids
Thanks for the detailed answer. I am from a developing country and my water
usage is indeed closer to 150l than 400l. Personally, this piqued my
attention because I moved to a new city just three days ago; the city is on
the coast and struggles with water supply. I was wondering how expensive such
a system is today.
I did think of energy usage but didn't find easy number to quote. The wiki
link suggests 6.5 m^2 of thermal concentrate and 75 W solar panel is enough
for the energy needs of 150l/day in 2011. This is not that difficult for a
remote or urban house, and will cost less than a few hundred dollars
[https://kenbrooksolar.com/price-list/solar-water-heaters-
pri...](https://kenbrooksolar.com/price-list/solar-water-heaters-price)
I don't know the capital costs of membrane distillation.
~~~
jaclaz
Well, if we use a "normal" (non solar) plant, using this as a base comparison:
[https://www.lenntech.com/applications/emergency-seawater-
des...](https://www.lenntech.com/applications/emergency-seawater-desalination-
units.htm)
it's 15 kW for 2000 lt/hour. Assuming the efficiency scales linearly (I
strongly doubt it does), a 40 lt/hour unit would be 300 W (which I don't
believe); it is more likely to be in the (not very accurate measure) "around 1
kW" range, which in layman's terms should mean some 6 or 7 sqm of solar panels,
possibly a little bit more.
About the cost, you can buy today commercial systems (only the desalination
part) in the US$ 5,000 range:
[https://www.echotecwatermakers.com/beach_house_desalination_...](https://www.echotecwatermakers.com/beach_house_desalination_systems.htm)
The smallest model has 5.4 A @230V which seems just in line with what we
calculated above, around 1 kW.
If we assume that the cost of the needed solar plant (the additional part
needed to supply the desalination plant) would be around 500 US$ or less, the
price to beat is 5,500 US$, but considering that added solar power surface
produces power that might be used, I would say that a more reasonable cost
would be below US$ 4,000, which is still a lot.
~~~
abdullahkhalids
Thanks for the detailed answer again. Clearly, there exists a business
opportunity to innovate on these systems, make them cheaper and sell them in
my new city.
Both the systems you link to are reverse osmosis systems which require
electric power to run the pumps etc. Solar radiation -> electric power is only
20% efficient. My understanding is that membrane distillation tech uses hot
water (60 degrees in OP link), which can run on solar thermal systems + small
amount of PV. Solar radiation -> thermal energy of water can be 80%+
efficient, so the price of the whole system might come down significantly from
the $5000.
~~~
jaclaz
Yep, but it was a comparison aimed to the real world and costs, not abstract
efficiency.
Given that you need for both 5-10 square meters of surface exposed to the sun
and assuming (and it is not necessarily given) that solutions "A" and "B" have
the same "current" (maintenance, spares, consumables) costs, I don't care if
solution "A" is more efficient than solution "B", I only care if solution "A"
costs less than "B".
If it does, then it should cost much less, as solution "B" might have a
tangible advantage (once you have purified enough water the solar energy can
be used for other uses).
Now, be nice, check the actual Solar Spring GmbH site (the spin-off of the
Fraunhofer Institute for Solar Energy Systems mentioned in the Wikipedia
article), and see how - strangely enough - they build/sell both "A" and "B"
solutions:
[https://solarspring.de/en/products-and-
services/#pg-356-2](https://solarspring.de/en/products-and-services/#pg-356-2)
[https://solarspring.de/en/solar-
purification/](https://solarspring.de/en/solar-purification/)
In any case high efficiency - usually - means more sophistication, added
components, and what not, so rarely exists a solution that is more efficient
AND costs less.
------
Someone
As always with filters, there’s the question of longevity. FTA: _”The water
vapour then travels through the pores in the membrane toward its colder side
and leaves the salt behind”_
⇒ chances are that salt stops this from functioning after some time. How much
time, and how easy is it to remove the salt, clearing the filter?
|
{
"pile_set_name": "HackerNews"
}
|
JSON Labs Release: Native JSON Data Type and Binary Format - johnchristopher
http://mysqlserverteam.com/json-labs-release-native-json-data-type-and-binary-format/
======
ape4
I feel as soon as every database supports JSON the world will move to a
different data format. Like I feel a bit sad for Scala having native XML.
~~~
mycelium
XML can't piggyback on the success of anything. It's just a human readable
data format, and a cumbersome one at that. It make sense in certain use cases
and not in others, but it has to fight it out with other data formats on its
own weak merits.
JSON is the object literal format for Javascript, likely the single most
widely deployed and used programming language in the world. Until paradigm
shifts obsolete the web browser, JSON will be ubiquitous.
~~~
taeric
This is.... an interesting perspective. Why couldn't xml "piggyback" off of
the success of html, for example?
~~~
shaneofalltrad
Good point, but xml is "cumbersome" or hard and json is not (in the eyes of
many). xml was widely hated before json become widely accepted while json is
not as widely hated.
~~~
taeric
I definitely agree that that is the perception. So, apologies to all if my
tone earlier implied any condescension.
What intrigues me is why is it so. There is a great article in one of the
Programming Pearls books (so, published earlier), that goes over how providing
provenance in a data format is highly useful. Yet, by and large, you can not
do this in JSON, because comments are disallowed.
------
threeseed
Such a strange world we live in.
NoSQL databases are adopting standard SQL interfaces and becoming more
uniform.
SQL databases are adopting their own JSON interfaces and becoming more
proprietary.
~~~
ryanpetrich
NoSQL databases are implementing features their users are requesting.
SQL databases are implementing features their users are requesting.
------
mathnode
JSON support is also provided by the CONNECT storage engine, which I think is
much simpler.
[https://mariadb.com/kb/en/mariadb/connect-json-table-
type/](https://mariadb.com/kb/en/mariadb/connect-json-table-type/)
e.g:
SQL > CREATE TABLE junk.j1 (a int default null)
ENGINE=CONNECT DEFAULT CHARSET=utf8
table_type=JSON
File_name='j1.json';
SQL > insert into junk.j1 (a) values (1);
$ ls $(mysql -NBe "select @@datadir")/junk
db.opt j1.frm j1.json
$ cat $(mysql -NBe "select @@datadir")/junk/j1.json
[
{
"a": 1
}
]
I use this for generating config files, and getting data into pydata tools; no
need for a database driver, which is interesting.
------
hliyan
They should lead with this revelation:
You can... [create] indexes on values within the JSON columns
Ability to index changes everything.
~~~
eknkc
How does pg handle this? I believe it has GIN index support on JSON columns
but that can not do range queries. Is it possible to use a functional index in
postgresql on JSON types too?
~~~
hliyan
I just skimmed the documentation[1]. Doesn't look like it:
8.14.4. jsonb Indexing
...
However, the [GIN] index could not be used for queries like the following:
-- Find documents in which the key "tags" contains key or array element "qui"
SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc -> 'tags' ? 'qui';
[1]: [http://www.postgresql.org/docs/9.4/static/datatype-
json.html](http://www.postgresql.org/docs/9.4/static/datatype-json.html)
~~~
saurik
To do a range query you would need to use a BTree index (or a GiST index that
is essentially configured to simulate a BTree) over the extracted value, not
on the column itself (exactly as is possible now in MySQL as of two
months ago, as they finally caught up and got a way to do functional indexes,
specifically designed for this use case). PostgreSQL has been capable of this
essentially forever (if you really needed to do this ten years ago, one line
of plpython to parse the JSON and extract the field would have worked
perfectly and built a perfectly efficient index; just no one wanted to store
this kind of stuff until MongoDB tried to push it as the future, and so no one
cared to bother doing writeup on how to do that).
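For concreteness, using the api/jdoc example from the docs quoted above, the
expression index plus a range query looks roughly like this (the 'age' key,
the index name and the connection string are illustrative assumptions; the
general approach is plain PostgreSQL expression indexing):

    # Sketch: BTree expression index over a value extracted from jsonb,
    # so range queries on that field can use the index.
    import psycopg2

    conn = psycopg2.connect("dbname=test")  # adjust for your setup
    cur = conn.cursor()

    # Index the extracted, cast value rather than the whole jsonb column.
    cur.execute("CREATE INDEX api_age_idx ON api (((jdoc->>'age')::int));")
    conn.commit()

    # A range query over the same expression can now use the BTree index.
    cur.execute("SELECT jdoc->'name' FROM api WHERE (jdoc->>'age')::int > 30;")
    print(cur.fetchall())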
------
ImJasonH
Can anybody explain why the function is "jsn_extract" instead of
"json_extract"? I thought it was a typo at first...
~~~
morgo
There may be some chance that json_* functions become a standard, but have
different call parameters.
This design allows MySQL to implement both without the backwards compatibility
concern.
------
ris
We already have a not-particularly-transaction-safe JSON database in MongoDB.
When postgres added JSON operations it brought the sanity and robustness of
its database engine to the wild west of document stores.
What's MySQL bringing?
~~~
dozzie
The usual thing, they try to catch up with Postgres from five years ago.
~~~
threeseed
PostgreSQL only supports indexing for JSONB so actually MySQL is ahead here.
~~~
ris
What functional difference does this make? Do you really want to keep your
JSON formatting? My guess is all MySQL does is create a shadow JSONB-like
format either in-table or in the index.
~~~
threeseed
According to here: [http://www.postgresql.org/docs/9.4/static/datatype-
json.html](http://www.postgresql.org/docs/9.4/static/datatype-json.html)
PostgreSQL doesn't preserve key ordering and strips duplicate key/value pairs.
I know some systems we have insist on JSON documents having a set order
sequence. Also there are often legal reasons why you need the raw, unmodified
data preserved.
~~~
mbesto
> _Also there are often legal reasons_
If you business has legal implications on your database format, then why would
you trust a NoSQL implementation?
For example - financial transactions:
[https://twitter.com/seldo/status/413429913715085312](https://twitter.com/seldo/status/413429913715085312)
------
ademarre
Neat. I don't see any mention of it, but it would be nice if it also supported
JSON PATCH for UPDATEs:
[https://tools.ietf.org/html/rfc6902](https://tools.ietf.org/html/rfc6902)
------
ccleve
Does anyone have a pointer to the actual internal binary format? There have
been a zillion attempts to do this efficiently (JSONB, BJSON, BSON, ubjson,
MessagePack) and I'm curious which one they chose.
------
el33th4xx0r
Now, I'm waiting for SQLite to add native support for JSON & XML data types
------
jstsch
Nice! Will this also be a feature of MariaDB?
~~~
michaelmior
I did a bit of digging because I was curious to the answer for this question.
My understanding is that the closest thing MariaDB has currently is dynamic
columns[0]. Essentially they allow you to have arbitrary key-value pairs
associated with each row in a table. According to a talk from a couple years
ago[1], it seems like the plan is to use this as a stepping stone toward JSON
support. Note that MariaDB also supports virtual columns[2] which could be
used to create indexes over dynamic columns.
[0] [https://mariadb.com/kb/en/mariadb/dynamic-
columns/](https://mariadb.com/kb/en/mariadb/dynamic-columns/)
[1] [http://www.slideshare.net/blueskarlsson/using-json-with-
mari...](http://www.slideshare.net/blueskarlsson/using-json-with-mariadb-and-
mysql)
[2] [https://mariadb.com/kb/en/mariadb/virtual-computed-
columns/](https://mariadb.com/kb/en/mariadb/virtual-computed-columns/)
------
Thiz
Codd must be rolling in his grave.
I am speechless.
~~~
chubot
I get what you're saying, but these JSON features are actually indicative of a
deficiency in the relational model:
The relational model has no performance model. The existence of ever more
complicated query optimizers proves this. This is fundamental engineering
issue, and NoSQL and JSON in MySQL are engineering hacks that address this
issue for specific problems.
It's also a big usability issue. Even if you can tune your queries, indices,
and schemas to remove a given performance bottleneck; it could be beyond the
skills of users. NoSQL and JSON are indeed simpler.
So it would be nice to come up with a better model instead of one-off hacks,
but it's hard. I think it would be cool if you could exhaustively enumerate
the queries an application makes, and then the database would somehow generate
the schema and indices for you, and also give the time complexity bounds for
the queries. But there is kind of a chicken and egg problem there, because the
queries depend on the schema.
|
{
"pile_set_name": "HackerNews"
}
|
Three Introductory Textbooks I Hope to Read Someday - gms
http://ghalib.me/blog/three-introductory-textbooks-i
======
shire
I'm reading the Stewart series for Calculus; he moves too fast, I feel.
------
e3pi
Concrete Mathematics is a new title to me. Your buoyant thrill over the Euler
font and making floor() and ceil() concrete - let me guess, Knuth? Sold me.
~~~
gms
It is an excellent textbook for sure.
|
{
"pile_set_name": "HackerNews"
}
|
Stupid question: why is it legal for the NSA to perform MITM attacks? - appleflaxen
I get it that some degree of latitude to break the law is justified when enforcing the law (such as a police officer running a red light when pursuing a criminal). But those types of permitted transgressions are carefully controlled by police department policy. What is the legal authority by which the NSA masquerades as a major internet service? Why isn't this criminal "hacking"/unauthorized access held to the same standard Aaron Swartz was?
It seems like even if they don't need a search warrant (vis-a-vis the presumed FISA court permission), MITM is still _illegal_, and they would need special permission for _that_.
Don't think I'm asking the question very well, but is it implicit? Explicit? And if explicit, where is it codified?
======
bediger4000
_What is the legal authority by which the NSA masquerades as a major internet
service?_
I think that you're asking that question very clearly and simply. I also think
it needs a clear and simple answer, but I doubt we'll see one. The USA's legal
environment allows what we in the Real World call "hair splitting", and
further, redefines everyday terms, and allows the use of the redefined term in
a deceptive manner.
Not to justify the NSA in any way (I think the NSA should be abolished) but I
imagine they do have some legal justification. They won't be forthcoming about
it, and if you read it, you'd be amazed at the interpretation the NSA would
have to do to allow itself to do MITM attacks. That's just a cynical guess on
my part, not a defense of the action.
------
skidoo
I think in today's world, not only are corporations/collectives people too,
but they have far more rights than actual individuals in every conceivable
manner. Regarding the cloak and dagger stuff, there is no real legal
justification, and I believe that is why they insist on fighting any actual
transparency. Laws today are retroactively rewritten to further protect those
guilty. The children are running the candy shoppe.
And as such are calling for a rather profound spanking.
|
{
"pile_set_name": "HackerNews"
}
|
Building digital libraries in Ghana - cmod
https://medium.com/message/ebooks-for-all-b23d2d8e63b8
======
zhapen
how to support?
|
{
"pile_set_name": "HackerNews"
}
|
The New York Times Expected To Launch Local Blog Network On Monday - transburgh
http://www.techcrunch.com/2009/02/27/new-york-times-expected-to-launch-local-blog-network-on-monday/
======
brandnewlow
Well, other major papers have talked about doing this for a while. Makes sense
for the Times to try it out. The Boston Globe has something like this already
and WickedLocal.com has been doing pretty well in the Boston area as well.
The equation: NYTimes Brand + 1 Nytimes editor + free copy from CUNY j-school
+ user-generated content.
Interesting stuff.
~~~
JoelSutherland
I wrote a post a few weeks about about ESPN doing the same for their
basketball coverage. I really think this is the direction things are headed:
[http://www.newmediacampaigns.com/page/espn-launches-a-
blog-n...](http://www.newmediacampaigns.com/page/espn-launches-a-blog-network)
~~~
brandnewlow
ESPN's actually launching city-specific sites now, starting, of course, with
Chicago.
Every big media company is making a local content play right now. I get
e-mails every month from someone developing the next one asking me if I'd like
to write for them for free in exchange for exposure.
------
alabut
Interesting idea. Clay Shirky, Jay Rosen and others have speculated for a long
time now that journalists will all morph into editors rather than the creators
of content, picking and choosing amongst both their own writing but also that
of the audience in the form of their blog posts. David Pogue already does a
form of this by cherrypicking the best comments from his massively popular
blog posts and articles.
This will probably make more progress than one of the other newspaper-related
HN posts today, the one about Newsday starting to charge for articles:
<http://news.ycombinator.com/item?id=497340>
------
albertsun
I wonder how much play these will get on the Times homepage, and how much
editorial independence they'll have from the main newsroom. It's possible that
these could end up hurting the core brand by putting it on lower quality
content.
If I were doing this I'd put the Times logo small and off in the corner and
try to give each local blog its own identity.
|
{
"pile_set_name": "HackerNews"
}
|
Faster R-CNN: Down the rabbit hole of modern object detection - vierja
https://tryolabs.com/blog/2018/01/18/faster-r-cnn-down-the-rabbit-hole-of-modern-object-detection/
======
rambossa
Does anyone try to get accurate bounding boxes (rotation, correct angle) with
these object detection models? Or does that greatly harden the problem?
~~~
electrograv
That’s exactly what Faster-RCNN does. Edit: Except for rotation — they are
axis aligned bounding boxes.
Mask-RCNN (more recent) takes it a step further and also generates a per-
object pixel segmentation mask, which is even better than a bounding box
obviously. For that reason, Mask-RCNN is much more exciting to me, and
incredibly impressive if you see examples showing what it can do.
That said, “under the hood” of Mask-RCNN are still axis aligned 2D bounding
boxes for every object (and this occasionally creates artifacts when a box is
erroneously too small and crops off part of an object). IMO we need to somehow
get away from these AABBs, but right now methods that use them simply work the
best.
------
nicodjimenez
Object detection is an interesting failure for deep learning. Systems such as
these perform well, but whenever you have something like non-max suppression at
the end you are bound to get hard-to-fix errors. I'm more optimistic about
deep mask and similar pixel wise approaches as well as using RNNs to generate
a list of objects from an image.
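To make the non-max suppression step concrete, here is the standard greedy
version most detectors bolt onto the end; the (x1, y1, x2, y2) box layout and
the 0.5 IoU threshold are the usual conventions, not something taken from the
article:

    # Greedy non-max suppression: keep the highest-scoring box, drop boxes
    # that overlap it too much, repeat with what's left.
    import numpy as np

    def nms(boxes, scores, iou_threshold=0.5):
        x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
        areas = (x2 - x1) * (y2 - y1)
        order = scores.argsort()[::-1]  # highest score first
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            # Intersection of the kept box with all remaining boxes
            xx1 = np.maximum(x1[i], x1[order[1:]])
            yy1 = np.maximum(y1[i], y1[order[1:]])
            xx2 = np.minimum(x2[i], x2[order[1:]])
            yy2 = np.minimum(y2[i], y2[order[1:]])
            inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
            iou = inter / (areas[i] + areas[order[1:]] - inter)
            order = order[1:][iou <= iou_threshold]
        return keep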
------
swframe2
I saw this today:
[https://github.com/facebookresearch/Detectron](https://github.com/facebookresearch/Detectron)
------
nnq
wasn't R-CNN already superseded by YOLO[1]? I didn't read the article, but no
mention of it to compare itself to, so seems outdated maybe.
anyone had the time to dig deeper into this?
[1]
[https://pjreddie.com/media/files/papers/yolo.pdf](https://pjreddie.com/media/files/papers/yolo.pdf)
~~~
eggie5
Tradeoffs: RCNN has better accuracy. YOLO is faster.
~~~
pilooch
rcnn is two steps and ssd is single step.
------
BillyParadise
Is this what they use for self driving cars?
~~~
bitL
Faster R-CNN gives you only like 5 fps on a high-end GPU, so the answer is no.
|
{
"pile_set_name": "HackerNews"
}
|
Erlang programmer’s view on Curiosity Rover software - deno
http://jlouisramblings.blogspot.com/2012/08/getting-25-megalines-of-code-to-behave.html
======
pron
I absolutely love Erlang and think that, along with Clojure, it provides a
complete ideology for developing modern software.
But the article implies (and more than once) that the rover's architecture
borrows from Erlang, while the opposite is true. Erlang adopted common best
practices from fault-tolerant, mission-critical software, and packaged them in
a language and runtime that make deviating from those principles difficult.
The rover's software shows Erlang's roots, not its legacy.
~~~
nirvana
How is that possible since Erlang dates from the early 1980s, and the Rover's
OS is from the 1990s?
~~~
makmanalp
Almost all spacecraft software is written a in similar fashion, not just
Curiosity's.
------
Tloewald
Back in the 90s there was a software engineering fad (unfair term but it was
faddish at the time) called the process maturity index, and JPL was one of two
software development sites that qualified for the highest rank (5) which
involves continuous improvement, measuring everything, and going from rigorous
spec to code via mathematical proof.
This process (which Ed Yourdon neatly eviscerated when applied to business
software) produces software that is as reliable as the specification and
underlying hardware.
~~~
vonmoltke
It may be a fad for the industry at large, but it's a requirement in US
government contracting (as CMMI). It goes beyond software, too. My former
employer just got their systems engineering up to CMMI Level 5[1] and was
working hard on getting electrical engineering there (they are only at 3).
[1] Software had been there for a few years.
------
1gor
Any _robust_ C program contains an ad-hoc,
informally-specified, bug-ridden, slow
implementation of half of Erlang...
<http://c2.com/cgi/wiki?GreenspunsTenthRuleOfProgramming>
~~~
gruseom
But in this case that's almost completely wrong. For example, "bug-ridden"?
Outfits like NASA use classical techniques (code inspection etc.) to ensure
that their software has exceedingly low error rates. This has been well
studied. Such an approach works, it's just too expensive for most commercial
projects. As for "slow", how likely is that?
On another note, it's pretty cool that the first three names credited in the
JPL coding standard document (which is linked to at the bottom of the OP and
is surprisingly well written) are Brian Kernighan, Dennis Ritchie, and Doug
McIlroy.
~~~
GuiA
>But in this case that's almost completely wrong.
On the second line of linked article:
"This is a _humorous_ observation"
(emphasis mine :) )
~~~
gruseom
Humor doesn't make it applicable. Greenspun's line had a specific meaning. It
doesn't stick to this surface.
------
donpdonp
"Recursion is shunned upon for instance,...message passing is the preferred
way of communicating between subsystems....isolation is part of the coding
guidelines... The Erlang programmer nods at the practices."
Best "Smug Programmer" line ever.
------
rubyrescue
Great article. The only thing he left out is the parallel to Erlang Supervisor
Trees, which give the ability to restart parts of the system that have failed
in some way without affecting the overall system.
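For readers who haven't seen the pattern, here's a toy sketch of the restart-on-failure idea in JavaScript (not Erlang/OTP - a real OTP supervisor also has restart strategies like one_for_one and one_for_all, and the names below are made up):

    // Toy "supervisor": restart a crashing worker up to a limit,
    // without touching any sibling workers.
    function supervise(startWorker, maxRestarts = 5) {
      let restarts = 0;
      const run = () =>
        startWorker().catch((err) => {
          if (restarts++ < maxRestarts) {
            console.log('worker crashed, restarting:', err.message);
            run(); // restart only this subtree
          } else {
            console.log('restart limit reached, giving up');
          }
        });
      run();
    }

    // Hypothetical worker: an async task that may reject (crash).
    supervise(() => Promise.reject(new Error('boom')));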
------
matthavener
The biggest difference to Erlang is VxWork's inability to isolate task faults
or runaway high priority process. (Tasks are analogous to Processes in
Erlang). VxWorks 6.0 supports isolation to some degree, but it was released in
'04, after the design work on the rover started. Without total isolation, a
lot of the supervisor benefits of VxWorks goes away.
~~~
vbtemp
Hm.. What do you mean by isolating task faults? I think a lot of that depends
on the underlying hardware, right (e.g., if the board has an MMU)? I know you
can insert a taskSwitchHook (I think it's called) that could be able to detect
and kill runaway high-priority processes.
Edit: in response to the reply, I suppose I should have mean tasks instead of
"processes" (which in VxWorks would be the RTP)
~~~
noselasd
VxWorks has no processes[1]. It has tasks. Basically, you write kernel code,
there's no user mode.
[1] As someone mentioned, VxWorks 6 did introduce processes and "usermode",
called RTP. As with most features of VxWorks, you compile that into your image
if you want the feature. But there's a lot of inertia, and much of the VxWorks
stuff I see doesn't use RTP yet.
------
vbtemp
The motivation for writing the software in C is this: Code Reuse. NASA and
its associated labs have produced some rock solid software in C. In space
missions commonly the RAD750 is used (with its non-hardened version, the
MCP), along with the Leon family of processors. Test beds and other ground
hardware are often little-endian Intel processors. VxWorks is commonly used on
many missions and ground systems, but so is QNX, Linux, RTEMS, etc... The only
common thing the diverse set of hardware, operating systems, and compiler tool
chains all support is ANSI C. This means that nifty languages like Erlang or
whatever - though there may be a solid case for using them - is not practical
in this circumstance.
I know some clever folks in the business have done interesting work on ML-to-C
compilers, but it's still in the early R&D phase at this point - the compiler
itself would have to be thoroughly vetted.
~~~
jensnockert
I didn't read it as arguing against C, just noting that there seems to be a
lot of commonality between the ways the code in the Mars rovers are designed,
and the way that robust Erlang applications are typically designed.
~~~
jlouis
Precisely. One thing very much against using Erlang for this problem is that
you need hard real-time behaviour. Erlang does not provide that. The other
point, you need static allocation almost everywhere, is also detrimental to
using Erlang for the Rovers.
That leaves you with very few languages you can use, and C is a good
predictable one for the problem. Its tool support is also quite good with
static verification etc. And it is a good target for compilation. As someone
else notes, most of those 2.5 Megalines are auto-generated.
------
pgeorgi
"We know that most of the code is written in C and that it comprises 2.5
Megalines of code, roughly[1]. One may wonder why it is possible to write such
a complex system and have it work. This is the Erlang programmers view."
Contrast this with <https://www.ohloh.net/p/erlang>: "In a Nutshell, Erlang
has had 7,332 commits made by 162 contributors representing 2,346,438 lines of
code"
I'm not sure if those roughly 154kloc really make a difference...
~~~
deno
The 2.5 MLOC of NASA code is mostly generated.
I’m not really sure what point are you trying to make…?
~~~
pgeorgi
From an erlang programmer's point of view, 2.5MLOC are a complex system.
On the other hand, every Hello World in Erlang drags in about 2.5MLOC of
liability (even if much of that is never run). And I doubt it's all
autogenerated.
So if anything, 2.5MLOC of generated NASA code is probably less complex than
the erlang runtime.
~~~
deno
The article is not about the MLOC. And anyway, that 2.5 MLOC of Erlang
includes all kind of libraries, like Mnesia, an entire database application.
------
jeremiep
Great article! I'd like to add that the D programming language also offers a
lot of features to create robust code with multiple paradigms, although the
syntax is heavily C oriented rather than functional.
'immutable' and 'shared' are added to the known C 'const' qualifier: for data
that will never change (as opposed to merely not changing in the declaring
scope) and for data which is shared across threads; everything else is
encouraged to use message passing via the std.concurrency module.
Pure functional code can be enforced by the compiler by using the 'pure'
qualifier. There is even compile time function evaluation if called with
constant arguments, which is awesome when used with its type-safe generics and
meta-programming.
There's unit tests, contracts, invariants and documentation support right in
the language. Plus the compiler does profiling and code coverage.
I'd be curious to test D against Erlang for such a system. (Not saying Erlang
shouldn't be used, it's the next language on my to-learn list, just that the
switch to functional might be too radical for most developers used to
imperative and OO and D provides the best of both worlds.)
~~~
misnome
I've been interested in D for a while, for these reasons and more - its
features look nice, but it never seems to have gotten much
popularity/mindshare. Could you hazard a guess why?
~~~
pjmlp
In the early days there were some issues in the community which lead to the
Phobos/Tango divide in the standard library for D1.
This is now past history as the community has joined around D2, just known as
D, and strives to reach compliancy with the "The D Programming Language" book
written by Andrei Alexandrescu.
D2 development is made in the open with source code available in GitHub.
Besides the reference compiler, dmd owned by Digital Mars, there are also LDC
and GDC compilers available. Currently it seems that GDC might be integrated
into GCC as of 4.8 release.
Right now it seems more people are picking up D, mainly for game development
projects.
~~~
CJefferson
Indeed, the D1 splits, and also questions being answered with "that will
be in D2". There is now a problem that there is not yet a consensus about how
D2 compares to C++11.
------
sausagefeet
Does anyone have any knowledge of why Ada isn't used over C? Specifically, it
seems like Ada gives you a lot better tools when it comes to numerical
overflows/underflows.
Also, what compiler does NASA use? Something like CompCert? What kind of
compiler flags? Do they run it through an optimizer at all?
~~~
vbtemp
See my post below - to reuse code cross platform. There's a diverse set of
compiler toolchains, operating systems, architectures. Only ANSI C is
supported by all of them. The compilers are specific to the target OS and
hardware, and flags are unsurprisingly the strictest possible for C89.
~~~
ibotty
i'd think that you can run ada generated code pretty much everywhere. even on
obscure hardware that works in space.
------
davidw
Great article and comparison, and a nice way of highlighting one of Erlang's
strengths.
However: I'm dubious that it's a strength many people here need. No, the
article did not say anything about that, but I am. A few minutes of downtime,
now and then, for a web site that's small and iterating rapidly to find a good
market fit, is not the biggest problem. And while Erlang isn't _bad_ at that,
I don't think it's as fast as something like Rails to code in, and have all
kinds of stuff ready to go out of the box.
That said, I'd still recommend learning the language, just because it's so
cool how it works under the hood, and because sooner or later, something will
take its best ideas and get popular, so having an idea how that kind of thing
works will still be beneficial.
~~~
timClicks
As you mentioned Rails, I thought I should mention Chicago Boss. It's a
blindingly fast Rails-inspired framework that takes many Erlangisms out of
coding in Erlang: <http://chicagoboss.org/>
------
DanielBMarkham
Message-passing better than STM? Wonder why?
~~~
matthavener
I think two reasons: 1) VxWorks directly supports message passing
(<http://www.vxdev.com/docs/vx55man/vxworks/ref/msgQLib.html>). 2) They seem
to prefer simple, obvious, "less magic" interfaces. STM is nice for its
"magic", but message creates very well defined, documented interfaces between
code.
~~~
deno
> STM is nice for its "magic", but message creates very well defined,
> documented interfaces between code.
The author’s previous post has a good overview of how message passing
contributes to that as well.
[http://jlouisramblings.blogspot.com/2012/06/protocols-in-
kin...](http://jlouisramblings.blogspot.com/2012/06/protocols-in-kingdom-of-
distribution.html)
------
thepumpkin1979
deno, I was wondering, if it's so similar to erlang, why not use erlang instead of C?
What is the major drawback, footprint?
~~~
malkia
The OTP virtual machine takes a lot of memory. It's an interpreter, which
means much slower execution, as the article and others above pointed out - The Erlang
VM is soft-realtime, it can't guarantee that something would finish in certain
amount of micro or milliseconds, or if it does guarantee - it's too much for
what they need (just guessing here).
But the concepts are very similar - message passing being the way to
communicate between modules, rather than shared memory ways.
This brings another topic - the Linux vs Minix debate :) - I guess there are
right things to be done for the right time, and right target. It's just
getting all these things right is the hardest.
~~~
deno
> It's an interpreter, which means much slower execution
Only BEAM is an interpreter, there are HiPE and ErlLLVM backends as well. You
can also write NIFs — functions in C that can be executed within VM.
------
ricardobeat
So a Mars Rover is much closer to a browser/backbone/node.js app than I could
ever imagine. The basic structure is surprisingly similar to javascript apps
these days: isolated modules, message passing/event loop, fault tolerance.
~~~
jlouis
Node.js is cooperatively multitasked. VxWorks (and Erlang) are preemptively
multitasked. So the basic structure is quite different. If one of your node.js
events infinite loops, it is game over. Be it web server or rover. Not so
here.
~~~
jeremiep
Node.js is preemptive, there's only one thread running JavaScript but multiple
C++ worker threads doing work on behalf of the script.
vibe.d on the other hand is cooperative since it uses coroutines for
concurrency.
~~~
malkia
Please explain what you mean by that - "pre-emptive"?
My understanding (coming from C and OS terms) is that pre-emptive means taken
over. e.g. if I have a real OS thread it is being temporarily "paused" and the
resources (CPU/mem/io) are given to something else. At some point control is
restored back.
But this is without the knowledge or instructions from the thread itself. So
things like priority inversions are hard to battle with pre-emptive
multitasking - for example thread A with low priority holding a mutex, while
thread B with higher priority waits for it. (and no need for mutexes, if only
message passing is to be used).
~~~
jeremiep
The node.js worker threads are native threads, which are preemptive on all
current platforms. The JavaScript context is running an event loop which most
likely must perform locking on its message queue and callbacks to async
operations are queued for execution on a future tick of this event loop. All
of this seems very preemptive to me.
What seems like cooperation in node.js is really just async operations queuing
up on the event loop. Since requests are also async events, they get
interlocked with callbacks from existing requests.
To me, cooperation is when you yield the thread to another coroutine. This
saves the state of the call stack, the registers, everything; meaning you
don't force your user to keep that state in closures. The user code in a
cooperative environment feels sequential and blocking and results are passed
by return values, not by calling continuations.
Its also friendlier to exceptions since it doesn't lose the entire call stack;
with node.js you only get the stack since the beginning of the current event
loop's tick.
~~~
deno
> The JavaScript context is running an event loop which most likely must
> perform locking on its message queue and callbacks to async operations are
> queued for execution on a future tick of this event loop. All of this seems
> very preemptive to me.
There’s a single loop which blocks until the task _yields_ while waiting on
the result from one of the worker threads. That’s cooperation. All queued
connections are starved until that happens. In Erlang, or just with pthreads,
the connections are processed independently. Think separate event loops for
each connection.
> To me, cooperation is when you yield the thread to another coroutine. This
> saves the state of the call stack, the registers, everything; meaning you
> don't force your user to keep that state in closures. The user code in a
> cooperative environment feels sequential and blocking and results are passed
> by return values, not by calling continuations.
That has noting to do with how execution is scheduled. Cooperative scheduling
requires passing continuations[1], so the execution can be resumed while it
waits. The simplest implementation is to use callbacks, the way node.js does
it. Futures and deferreds[2] are a little bit more sophisticated (Python’s
Twisted, probably something for node.js exists as well), as they allow for
better composition. And of course you can hide the continuations entirely,
which can be done in both Scala (compiler plugin) and Python (gevent or using
generators), rewriting the direct control flow by breaking it on yield points
automatically (this is how exception throws work in most languages btw), but
the limitations inherent in having a single event loop per thread will still
exist.
[1] <https://en.wikipedia.org/wiki/Continuation-passing_style>
[2] <https://en.wikipedia.org/wiki/Futures_and_promises>
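To make the callback-as-continuation point concrete, here's a minimal Node.js-style sketch (it just reads the script's own file so it can actually run):

    const fs = require('fs');

    // Blocking: the single JS thread - and every queued connection -
    // waits right here until the read completes.
    const text = fs.readFileSync(__filename, 'utf8');
    console.log(text.length);

    // Cooperative: the callback is the continuation. The event loop is
    // free to service other work until a worker thread finishes the read.
    fs.readFile(__filename, 'utf8', (err, data) => {
      if (err) return console.error(err);
      console.log(data.length);
    });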
~~~
ricardobeat
> a single loop which blocks until the task yields while waiting on the result
> from one of the worker threads
Yes, node.js is cooperative, yet since all I/O is asynchronous the time spent
blocking is mostly dispatching and simple operations, it doesn't block while
waiting - that's where its performance and high concurrency come from. Doing
CPU-heavy work in the server/main process is a no-no.
~~~
deno
Obviously that approach is fine enough for many things. Before node.js, people
have written those kinds of servers in Twisted or Netty, with great results.
Netty based framework powers, for example, much of Twitter. I was just
explaining how the scheduling works :)
The scored society: due process for automated predictions [pdf] - anigbrowl
https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1318/89WLR0001.pdf
======
tacon
I took Caltech's "Learning from Data"[0] (machine learning) MOOC a couple of
years ago. Of course, one of the classic ML applications is loan approval. In
one lecture, Prof. Abu-Mostafa mentioned he was a consultant to a large
financial organization and his project built a successful loan selection
system. But then the organization's CEO asked "Why did you turn down these
loans? Under the Fair Credit Reporting Act, we have to tell the applicants why
they were rejected." Of course, he couldn't say "because that applicant was
not on a high enough peak in a 10,000 dimension space." A question came back
from the audience, "What did you end up doing?" and the professor told us,
sheepishly, "I can't tell you." And so it goes...
[0]
[https://work.caltech.edu/telecourse.html](https://work.caltech.edu/telecourse.html)
~~~
dragonwriter
Seems to me that you ought to be able to apply simpler techniques than those
used to determine whether the loan was approved or denied to determine a set
of changes that would have caused the application to be approved, and use that
to report a set of reasons that the application was denied.
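As an illustration of that idea (everything here - the approve() rule, the feature names, the candidate values - is hypothetical), you could treat the model as a black box and search for small changes that flip the decision:

    // Find single-feature changes that would have flipped a denial.
    function explainDenial(approve, applicant, tweaks) {
      const reasons = [];
      for (const [feature, candidates] of Object.entries(tweaks)) {
        for (const value of candidates) {
          if (approve({ ...applicant, [feature]: value })) {
            reasons.push('would be approved if ' + feature + ' were ' + value);
            break;
          }
        }
      }
      return reasons;
    }

    const approve = (a) => a.income / a.debt > 2 && a.latePayments === 0;
    console.log(explainDenial(
      approve,
      { income: 50000, debt: 20000, latePayments: 1 },
      { income: [70000, 90000], latePayments: [0] }
    ));
    // -> [ 'would be approved if latePayments were 0' ]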
Improving Angular performance with 1 line of code - lolptdr
https://medium.com/@hackupstate/improving-angular-performance-with-1-line-of-code-a1fb814a6476#.f94wuqq7d
======
RKoutnik
As someone who's specialized in AngularJS optimization [0], this doesn't
surprise me in the least. Developers leave plenty of performance optimizations
on the table, some worse than this one. The truth of the matter is that in the
general case, it doesn't matter. We engineers like to go on and on about this
or that perf efficiency but most of the time the code runs fast enough (and
the majority of the time when it doesn't, ads are to blame over framework
tweaks).
Heck, I shouldn't even be saying this as it'll cut into my very lucrative
field but unless you've got a confirmed performance problem, you shouldn't
think about it at all. Really. Leave all of the fancy perf tweaks behind.
Sure, you might get ridiculed on Medium but that's par for the course here on
the internet. I'm afraid articles like these will cause lots of cargo-cult
'optimization' that just makes the job harder for future devs.
Make it _work_ , make it work _well_ , THEN make it work _fast_.
[0] I wrote Batarang's new perf panel before Google abandoned the project
~~~
dmitrygr
>>The truth of the matter is that in the general case, it doesn't matter. We
engineers like to go on and on about this or that perf efficiency but most of
the time the code runs fast enough
THIS!
This is why battery life on mobile sucks
everyone who thinks this way is the reason
.
As everything moves to the web, the performance of your code matters
.
So what if my phone has just enough CPU to render your site? You should not be
content with that - you should strive to use less, so my CPU can go to sleep
faster.
.
Your job is _supposed to be hard_. you're an engineer! Please consider the
consequences of your actions.
~~~
andybak
> Your job is supposed to be hard. you're an engineer! Please consider the
> consequences of your actions.
My clients pay my wages and they are the ones who choose what my priorities
should be. Where are these mythical developers who get to choose how long to
spend on optimising working code without any financial imperatives? Outside of
hobby projects there's always someone counting the pennies.
~~~
verytrivial
I agree, but this is a moral imperative, not a financial one. And before you
say "it's immoral to spend the client money on things they don't want", you're
right. Your job is to convince the client it is "worth" it.
~~~
746F7475
So, what have you done to improve the situation?
------
blinkingled
This is a case of bad defaults - why should a developer be required to write
code, even if it is a one liner, to _disable_ debug? It should be the other
way around - you write a single liner to enable debug.
~~~
megablast
Just another reason to hate angular.
~~~
__derek__
That's not just Angular. From the React README[1]:
> Note: by default, React will be in development mode. The development version
> includes extra warnings about common mistakes, whereas the production
> version includes extra performance optimizations and strips all error
> messages.
> To use React in production mode, set the environment variable NODE_ENV to
> production. A minifier that performs dead-code elimination such as UglifyJS
> is recommended to completely remove the extra code present in development
> mode.
[1]:
[https://www.npmjs.com/package/react](https://www.npmjs.com/package/react)
------
gamache
I'd like to see some data indicating there is actually an appreciable
performance difference. Indeed, with only one line of code to add, the setup
cost is low, but this article asserts a performance gain without any evidence.
~~~
aaronbrethorst
The official Angular docs describe it thusly:
You may want to disable this in production
for a significant performance boost
[https://docs.angularjs.org/api/ng/provider/$compileProvider](https://docs.angularjs.org/api/ng/provider/$compileProvider)
I'm inclined to believe the engineers who wrote the docs on this.
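For reference, the line itself goes in a config block, roughly like this (the module name is made up):

    // Disable AngularJS debug info for production builds.
    angular.module('myApp').config(['$compileProvider',
      function ($compileProvider) {
        $compileProvider.debugInfoEnabled(false);
      }
    ]);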
~~~
Bahamut
A Twitter engineer mentioned this in a lightning talk in the AngularJS SF
meetup a little more than a year ago - he mentioned that he found speed gains
of about 33% with that one line.
It's a great tool to have in your belt for optimizing angular apps, most
people working with angular are not aware of this unfortunately.
------
Zekio
Shouldn't debugging be something you opt-in to rather than opt-out of?
Seems like a weird design choice
~~~
mkolodny
Django has DEBUG set to True by default, too.
~~~
Lxr
They have a fairly obvious _" SECURITY WARNING: don't run with debug turned on
in production!"_ above it though.
------
nrub
Looks like it's the first suggestion in their production developer guide,
[https://docs.angularjs.org/guide/production](https://docs.angularjs.org/guide/production)
------
jsumrall
I've searched a few minutes for any blog post or article which demonstrates
the performance difference between turning this setting on or off, and I
haven't found anything.
If you don't have a benchmark, you don't have a performance boost.
~~~
dimgl
This is a silly statement. Not everything is placebo. I'm working on a bigger
Angular application and adding this one line of code made a small perceivable
difference.
~~~
rrdharan
Did you do a double blind test or use any instrumentation to confirm your
perception? I ask as someone who has more than once been convinced that I'd
made something faster only to prove myself wrong after measuring it.
------
mangeletti
Wow, I did not know that everyone else _also_ right clicked to "inspect
element" all over the place, for no reason...
I love moments like this.
~~~
melvinmt
It comes in quite handy when you're dealing with sites that attempt to hide
content behind a paywall, with just a front-end solution...
#nevertrusttheclient
~~~
imjustsaying
That still works? Cool.
Off the top of your head, do you remember any examples of sites that expose
paywalled content like this?
~~~
mangeletti
Not a _pay_ wall, but Quora requires login, and you can just remove the
overlay and the "noscroll" body class.
------
sker
Dude should have made a small fortune selling optimization services to those
big corps.
~~~
hinkley
There was a 3 job string in the late nineties where the first impressive thing
I did was go fix the log levels of the code and get transaction times down by
50%. Sometimes the simplest tricks are the best.
I've also been known to take a couple minutes off a build process or fix
something that is annoying everybody to break the ice.
------
aaronbrethorst
Does Angular 2.0 make this _one weird trick_ unnecessary?
~~~
jakegarelick
This article reminds me of those X Hate Him! ads.
[http://i1.kym-
cdn.com/photos/images/newsfeed/000/633/254/14f...](http://i1.kym-
cdn.com/photos/images/newsfeed/000/633/254/14f.jpg)
~~~
cookiecaper
I agree. He kept repeating "this one line of code that improves Angular
performance!" like he's trying to optimize for SEO on the term "improve
angular performance".
------
justinsaccount
I see this more as a tooling failure. All the
yeoman/grunt/gulp/webpack/boilerplate/starterkit BS that you need to do to get
a minified/concatenated/compressed build deployed and none of that turns this
option off?
The only way I was ever able to get an angular project built/tested/deployed
was by using something like yeoman. It's kind of shocking that the default
project templates set up everything that you need except for this one thing.
I do notice that the made with angular site isn't even minifying their code:
[https://www.madewithangular.com/static/js/main.js](https://www.madewithangular.com/static/js/main.js)
Though that may be intentional so that people can see how it is written.
------
aeontech
Yeah, this is the hidden cost of not choosing sane defaults. If you're not
sure what "sane" is, default to "most likely/common use case". It's good to
hear this is changed to opt-in in Angular 2 though.
------
artursapek
Slow websites hate him!
~~~
hiimnate
Check out this one weird trick to making your websites 20% faster!
~~~
melvinmt
Line 17 will make you furious.
------
joesmo
What the author discovered is really shitty defaults for Angular. Most
software is like that. Don't blame the developers using the software, blame
the Angular developers for choosing shitty defaults. Instead of choosing
production ready defaults, they chose development ready defaults, creating
more work for everyone.
~~~
enraged_camel
>>Don't blame the developers using the software, blame the Angular developers
for choosing shitty defaults.
I like to blame developers who don't read the friggin documentation on the
software they are using.
This particular suggestion is literally the very first one in the developer
guide for running Angular in production:
[https://docs.angularjs.org/guide/production](https://docs.angularjs.org/guide/production)
~~~
joesmo
Oh please. There are thousands of Angular guides out there. Some people don't
use the official documentation, especially when it's as shitty as Angular's
is.
------
seanwilson
I've played with this setting on a few projects and didn't notice any big
difference so I'd like to see benchmarks for it. I reasoned that if it's not
significantly slowing the code down and could be used for easier debugging
later I didn't see the harm leaving it in.
------
azernik
Less link-baity title - "Using development options in production is bad for
performance."
------
acconrad
Yeah this seems more like something that Angular should be doing, rather than
the end user. If you're in a production environment Angular should enable this
for you.
------
roncohen
Great trick. Incredible that it's not on by default.
_if_ you've applied that trick and still find yourself wondering what's
taking up time, we just released a performance monitoring tool. We're looking
for feedback, so please let me know how you find it:
[https://opbeat.com/angularjs](https://opbeat.com/angularjs)
------
teknologist
I think another question to ask is why calling angular.element().scope() is
considered use of "debug info"
------
shanselman
Forgive my ignorance, but doesn't the Angular framework
* have this off by default and you have to turn it ON in prod?
* have it cause a Console.Log("DEBUG IS ON Y'ALL");
Either way, I mean, there's ways to give folks the heads up, right? It seems
odd to have to ADD this in Prod.
------
inquire
Protractor also works with debugInfo turned on from v 3.1.0
[https://github.com/angular/protractor/blob/master/CHANGELOG....](https://github.com/angular/protractor/blob/master/CHANGELOG.md)
------
matharmin
If you're using Ionic 1.x, the debugging info is forced to enabled, since some
of Ionic's internals depend on this for some reason. Haven't tested with 2.x
to see if they've improved this yet.
------
Pxtl
Nobody will write that one line until somebody says that it's time to make
performance a priority.
------
awqrre
This might have gone unnoticed by most if the web were binary blobs...
------
wintersFright
tried this but it broke the angular-datatables directive
------
wintersFright
tried this but broke the angular datatables directive...
------
nathancahill
Improve Angular performance with 1 line of code: rm -rf /
------
partycoder
TL;DR
------
natvert
this. is. awesome. (and sad that i didnt know this already)
------
iLemming
Whatever. I'm using Clojurescript. I'm happy. When I used angular - I wasn't.
------
Alex3917
Flagged for being written as SEO spam, which is unfortunate because the
content is actually useful.
Microbots Are on Their Way - smb111
https://www.nytimes.com/2019/04/30/science/microbots-robots-silicon-wafer.html
======
raybon
20 years ago, when I was a bright eyed graduate student, I was mesmerized by
MEMS (micro-electro mechanical systems), which promised similar revolution. I
learnt about 'artificial muscles', and MIT Technology Review even ran a cover
issue on how MEMS will revolutionize everything. This article could have been
written in the year 2000 except it would have mentioned MEMS then. Now there
is no mention of it. I'm older and saner now. Still feels very much a academic
pipe dream than real engineering. I dreamed of working with Kris Pister and
now he is 20 years older. Another young professor at UPenn and Cornell is
trying to get tenure....call me cynical but this too shall pass. Issues of
toxicity in human body etc are huge...
~~~
Espressosaurus
MEMS have been much quieter in their influence as it turns out. Now they're in
everything with an IMU or accelerometer, especially including things like your
phone; they're in disposable pressure sensors; and they're in microphones[1].
They've revolutionized some parts of how we live our daily lives, though not
in the same way innovations like the car, airplane, computer, or cellphone
have.
I expect microbots will be similar. After a decade or two of hard work and
billions of dollars invested, they will quietly revolutionize some other small
parts of our lives. Meanwhile, the rest of the world moves on.
[https://en.wikipedia.org/wiki/Microelectromechanical_systems...](https://en.wikipedia.org/wiki/Microelectromechanical_systems#Applications)
~~~
flyinglizard
MEMS is also used for laser beam steering (depth sensors, projectors),
oscillators and even loudspeakers. MEMS is truly a breakthrough where physics
meet electronics.
Commercially MEMS is also very interesting because it’s a branch of
semiconductor manufacturing which is dominated by different players compared
to the regular TSMC/Samsung/Intel trifecta.
~~~
mikeash
Some devices use MEMS oscillators instead of crystals, which has the bizarre
side effect of making them allergic to helium:
[https://ifixit.org/blog/11986/iphones-are-allergic-to-
helium...](https://ifixit.org/blog/11986/iphones-are-allergic-to-helium/)
------
colordrops
I feel like I've been seeing stories like this, with accompanying microscopic
footage, for decades. Has anyone ever built anything more complex than a
simple actuator? These things seem to barely qualify as robots.
~~~
Hextinium
I think the main thing I always see with posts like these are that they always
say what could be done but with no plan or process to do any of it. The
problem with microbots seems to be a similar problem to spaceflight currently:
we can make it and its possible but no current commercial opportunities exist
to push the technology past the "we can do it" stage.
~~~
skybrian
Well, spaceflight is a big industry though. Or do you mean human spaceflight?
~~~
Hextinium
Anything outside of telecommunications or earth observation. Human spaceflight
is a different can of worms with the additional dilemma that no large
hardware failures can occur. Rockets fail a lot, people dying is bad for PR,
and thus you need a really good reason to send a person. The thing then is
what do you do with people in space that makes enough money to warrant it?
Basically nothing. Thus the lack of any human spaceflight other than research
purposes.
~~~
opportune
I think theoretically, you could make money refining rare minerals from
asteroids (like platinum, gold, rare earths). The issue is that since the
startup cost is so high, the mission is very risky (conservatively, it would
probably cost in the 10s of billions to be able to refine anything at scale)
and 10s of billions worth of rare minerals won't stay worth 10s of billions
since the world only needs so much minerals - the price would fall
substantially.
I think the only thing really worth doing in space, economically speaking,
will be energy related. Maybe harvesting helium 3 will be lucrative. Maybe (I
doubt it) there will be a profitable way to harness solar energy - could be
more profitable if we can produce the panels in space by the asteroid they are
procured from, but then the issue is transmission.
Actually, there is another thing. I think space tourism could be lucrative.
Imagine if you set up a lunar colony for 20b that could house ~1k people with
about 200 permanent staff. If 800 people are paying $100k/week (plus cost of
transportation) to stay up there, you're making $4b per year. Personally if I
were worth in the ~10s of millions I would absolutely shell out $100k to spend
a week on the moon so I think this kind of thing could work
~~~
RickSanchez2600
It is still only a playground for the rich and wealthy. The Space industry
needs to become like the Airline industry in minimizing the accidents and
making it safe and cheap enough for the average person to afford.
There are people who want to go into space to be like Captain Kirk or some
other scifi character. Some of them are rich or wealthy and can afford the
$100K a week on the Moon.
Thing is a moon-base has to avoid meteorite strikes and other hazards. If the
water or oxygen gets contaminated that's it for everyone.
------
OrgNet
I really like the slaughterbots:
[https://www.youtube.com/watch?v=HipTO_7mUOw](https://www.youtube.com/watch?v=HipTO_7mUOw)
~~~
User23
Interestingly, black powder era technology easily defeats "stochastic motion."
It's called a shotgun.
~~~
OrgNet
Can you defend yourself from a swarm of 10 coming from all directions at
30mph?
~~~
User23
With 10 shotguns using the same AI? Why not?
------
pazimzadeh
This is fantastic, but another approach would be to use the things that are
already small and learn how they work, and then maybe reprogram them (i.e.
bacteria).
Also, as Richard Feynman pointed out
([https://www.youtube.com/watch?v=4eRCygdW--
c](https://www.youtube.com/watch?v=4eRCygdW--c)) at small scales water is
thick like honey so it's probably more efficient to use a rotating turbine
mechanism (i.e. flagella) for propulsion rather than trying to shrink paddles
down to micron sizes.
And almost any foreign material that you put in your body will eventually be
covered in bacterial biofilms. So might as well learn how to program and
control the bacteria in the first place.
~~~
dmix
From the article:
> Challenges remain. For robots injected into the brain, lasers would not work
> as the power source. (Dr. Miskin said magnetic fields might be an
> alternative.) He wants to make other robots swim rather than crawl. (For
> tiny machines, swimming can be arduous as water becomes viscous, like
> honey.).
------
Causality1
I feel like I've been reading this article every five years since 1990.
------
_bxg1
I wonder what kind of "brain" would even be possible to put in a robot this
small, given the physical limit we're approaching. Could limit the potential
for certain dystopian scenarios.
~~~
leggomylibro
The ones in the article look like they're basically photocells which get
activated by lasers to move the 'legs'.
At that scale, you probably couldn't have much battery power either. Maybe
it'd be possible to power a small microcontroller off of radio signals which
also send instructions. They wouldn't need to be very "smart" as long as
something else in the room was, like a phone or router or something.
~~~
_nalply
When I looked at the animated picture I wondered about the flashing dots on
the circuit till I realized that these are probably reflections of the laser
beams directed onto the microbot.
------
danmaz74
This looks like the most important point:
> Dr. Miskin worked around the power conundrum by leaving out the batteries.
> Instead, he powers the robots by shining lasers on tiny solar panels on
> their backs
------
ducttape12
I've played the Metal Gear Solid games enough to know the crazy possibilities
of nano machines.
------
DEADBEEFC0FFEE
"could one day", beware these three word in a title.
------
higgy
I can't tell if it's cute or absolutely terrifying.
~~~
cellular
Meanwhile, [https://youtu.be/vOLvFhdkLmA](https://youtu.be/vOLvFhdkLmA)
------
mensetmanusman
Flying microdrones have obvious applications (entertainment, replacing
fireworks, on demand traffic guidance, etc.), but ground based versions are a
mystery to me...
------
User23
Does anyone else look at the microbot next to the paramecium and see an Atari
sprite next to a photograph?
------
nixarian
Or maybe we could inject them into the brain, and then could 'override basic
autonomic function. Maybe we could use this on a recently dead body and it
could do simple things, like amble around, maybe grab things, or bite.
On depression, privilege, and online activism - kencausey
http://mainisusuallyafunction.blogspot.com/2014/06/on-depression-privilege-and-online.html
======
Exenith
This guy is completely innocent and has done no wrong, yet he feels so guilty
and tip-toes around everything, just because he has white skin and a penis.
That's can only be the result of a movement that is sexist and racist. That's
the issue people have.
But I hear you. "He's privileged!" The thing is, he has no control over the
privilege that others give him. And no one should feel guilty for something
they do not ask for, do not do themselves, and do not have any control over.
The fault with privilege is in the people who give it to others. The only way
to fix such a thing is for all of society to stop giving favors to people
based on their gender, race and sexuality. What a fucking shame it is that
these SJWs are doing exactly the opposite of that.
This is how the status quo of "social justice" has transformed:
Past: "No one should receive positive or negative treatment based on
superficial factors, we are all of equal worth"
Present: "White men are all better off, therefore let's treat them like shit
and do lots of favours for everyone else to even it out"
If that sounds like a positive transformation, and not a regression, then I am
quite concerned. One seems to be an enlightened response, the other seems to
be at the level of a teenage middle child.
~~~
coolfuzion
The privilege of entering the lifeboat last...
------
dangerousthere
It's a lesson that doesn't seem to get learned, even after decades of research
saying that people start internalizing negative stereotypes, 'reasons' they
have been abused, and statements indicating low expectations of them. When
people hear 'math is hard', many people don't even try to learn it. When
people who consider themselves part of 'group X' hear 'group X is bad at
whatever', they don't do 'whatever'. And when 'group Y' hears that they won't
amount to anything, many just give up trying. It's destructive, it's horrible,
and it is real.
There are a few important things to realize here. Some of those who say these
horrible things often don't understand the topics very deeply. As a former,
and still peripheral member of various social justice 'communities' (and I
quote that because the 'communal' part is often centered around unfocused
anger), I see and have seen so many people come to discussions with only
talking-point-level understanding who run with it (or are simply running on
'cached thoughts', rather than situational ones.) The other thing, which the
author likely knows well, is that in the Oppression Olympics, the author is in
the biggest loser group, so is going to be cared about the least. Despite
endless blog posts and pleas to end the OO, internecine attacks, ally-bashing,
purity tests, knee-jerk responses, and all the other wonders of the SJW world,
it goes on. I don't blame the author for exiting. It's part of what made me
sideline myself, and many other former dedicated members of these communities.
------
petercooper
_At least I 've mostly stopped using Twitter._
This is absolutely a good idea. As a gung-ho advocate for Twitter for several
years, the service has become a cesspit of mud slinging, public shaming, and
rapidly bringing together people for crusades rather than somewhere opinions
can be discussed sensibly and processed over time (140 characters + real time
== ripe for gut reactions and basal instincts, unlike blogging or video).
A 'lite' alternative I've adopted in the past week is to only read and post
but to totally ignore the "Notifications" tab (so you see no @replies). That
way you get the best bits but without exposure to most of the toxicity.
~~~
GuiA
The company you keep...
I follow mostly academics, researchers, and people who tend to subscribe to the
"talk is cheap, show me the code" mentality. My Twitter experience is very
positive (and when someone gets a little too happy with inflammatory tweets,
the "mute"/"unfollow"/"disable retweets" buttons are very easy to find).
~~~
petercooper
This is very true. Unfortunately a lot of the more prolific open source/Web
people I need to follow due to my work like jumping onto bandwagons or raise
pitchforks on social matters. The people I'd follow if I were just using
Twitter for fun would be very different (and indeed, having realized that, I
started a new account last night that's entirely for my own gratification
:-)).
------
orasis
I think a big part of the problem is that when people are angry, they resort
to shame as a tool of retaliation. Instead of saying "Dude, what you said was
uncool." they say "Dude, you are a piece of shit."
I think Brene Brown explains very well the importance of guilt vs shame and
I'd like to see greater awareness of this distinction in conversations about
"isms":
[http://www.ted.com/talks/brene_brown_listening_to_shame](http://www.ted.com/talks/brene_brown_listening_to_shame)
~~~
AnimalMuppet
But in resorting to shame as a tool of retaliation, they become the problem
that they are angry about.
------
partomniscient
The author is suffering from unreciprocated empathy.
After a while of noticing the shades of grey, you come to realise that those
at the extremes see things as purely bivalent. i.e by their definition if
they're 'white' and your white has the merest hint of black in it, you're
against them Q.E.D.
Getting involved leads to a lose-lose self-reinforcing situation. Even opining
whilst on one's own turf will get you dragged in if you're noticed by one of
the extremists.
------
boomlinde
Getting depressed by hanging around with assholes is not very surprising. If
these are the people you get to meet in your online activism, it probably
helps to stop thinking of them as partial to your cause, especially if you
truly believe that there are more constructive ways for the movement to
manifest itself.
------
daktanis
extremists on my side of an argument make me far more unhappy than extremists
on the opposing side.
~~~
AnimalMuppet
I'll go a step further. Extremism on my side of an argument is more likely to
damage me (albeit unintentionally) than extremism on the opposing side.
Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander) - altro
http://www.scottaaronson.com/blog/?p=1799
======
apl
I appreciate the notion of a "pretty-hard" problem of consciousness; it nicely
captures what people in the field, philosophers and scientists alike, are
_actually_ looking for.
His counterarguments appear sound, especially the technical concerns. (That's
not particularly exciting, though: formalisations of philosophical arguments
often crumble under the mathematician's lens. Even the best ones!) They're not
exactly new, I'd say, but their rigour is refreshing. My key problem with the
overall approach remains the shitty test set we have for any theory of
consciousness. It ultimately consists of two elements. If we're being
generous, there's three. Namely, almost everyone except for the philosophical
extremist agrees that we are conscious and that a stone isn't. IIT works quite
well for these cases. Some people would include, say, a cat as a conscious
being while maintaining that its "level" of consciousness is reduced. This
boils down to
C_stone < [C_cat <] C_human
which isn't much. There's no real hope for extending it. Any other cases like
the ones Aaronson discusses are ones for which we don't even have strong
intuitions. Sure, we'd like to think that none of the entities he describes
are in fact conscious. But if that's the bullet I have to bite in order to get
a decent theory of consciousness, then I might be OK with that.
~~~
Udo
There is little doubt that the cat is conscious, and the idea that
consciousness is a gradient with more than one dimension is not really in
dispute. In fact, you can build a very simple gradient model by observing
nothing but humans: a human baby is scarcely conscious at all, a toddler is
pretty much on par with the cat, and a grown human has a higher level still.
The reason why we think this is true is based on at least two things:
awareness and meta-awareness (which includes the ability to reason about one's
own existence as well as the world in general).
The cat and the toddler are clearly conscious in the sense that they do have a
recognizable life experience. They don't just react to the environment, they
make models of it, and they have a limited ability to abstract observations.
They experience life, and they have complex dreams and emotions. However, they
still lack a deeper understanding of causalities, and they can grasp the
subjective realities of other beings only on an instinctive level. Adult
humans are less limited in this regard, but it's easy to imagine hypothetical
beings who perform better still.
That's bad news for the validity of consciousness as a concept, since it seems
to be interlinked with intelligence, which itself is a fuzzy idea at best.
So these observations amount to the position that "consciousness" isn't a
fundamental property at all, it's the result of other processes and it can
come in different flavors. Likewise, the concept of "intelligence" is
similarly flawed. In both, we tend to make the mistake of looking at them as
simple scalar values. They're not. They're a name we give for a collection of
different capabilities. As such, I believe a mathematical model of
consciousness is actually pretty unscientific, since consciousness is not an
objective property. It's an artificial label we're obsessed with.
~~~
apl
That's not the notion of consciousness we're dealing with when talking about
the "hard" problem of consciousness. You're describing its psycho-functional
interpretation: the ability to see yourself as a "self".
Qualia are a largely orthogonal issue. Specifically, it's imaginable that an
entity we'd consider unintelligent has a rich and detailed subjective internal
life.
~~~
dragonwriter
> Specifically, it's imaginable that an entity we'd consider unintelligent has
> a rich and detailed subjective internal life.
Yes, just as its possible that an entity we'd consider intelligent _doesn 't_,
but since it's subjective, its not subject to empirical -- and, hence,
scientific -- verification. Anything we are going to say objectively on issues
_related to_ consciousness isn't going to address that, because that's a
question outside the scope of science.
~~~
KC8ZKF
It's ontologically subjective but epistemologically objective, in the sense
that we can ask questions and observe behavior. Pain is a good example.
------
byerley
I'd argue that the author's views are too concerned with intuition. If we
reduce consciousness to a scale rather than a binary property; of course a
thermostat is somewhere on the scale, it simply has such a small value that
it's not worth considering - leading to our philosophical intuition. There's
nothing magical about our level of consciousness; as unintuitive as that may
be. Aaronson argues that the model must yield to our intuition, but if the
model is consistent and explains our observations, our intuition should yield
to it. - the obvious analog here being quantum mechanics
In regards to saying "both that the Hard Problem is meaningless, and that
progress in neuroscience will soon solve the problem if it hasn’t already,"
neuroscientists and mathematicians too often overlook the Turing test here.
It's consistent (if not accurate) to say that the "Hard Problem" is ill-
defined, but maintain that we've clearly solved it if we can comprehensively
beat the Turing test. That's what the Turing test was designed for, knowing
that we've created a conscious machine because it's indistinguishable from a
human; even if we can't decide on what consciousness is.
------
alokm
I have had the fortune of working on this IIT right after my IIT :). I worked
on implementing the research software used by Giulio Tononi and his research
team. I added a visualizer in OpenGL, and optimized the calculations for
calculating integrated information.
------
tunesmith
Pretty dense article for what seems to be a really daffy definition of
consciousness, so maybe someone can summarize? It seems to just be the
difference between systems thinking and reductionism, but why would any
irreducible concept be evidence of consciousness? Why on earth would that idea
make any more sense than, say, a math problem that is too hard for a 4th-
grader being evidence of consciousness? Or a locked machine, or a patented
process, etc? There's nothing intrinsically special about an irreducible
process other than it being irreducible - it's not like it's mystical or
anything.
~~~
gone35
Close. Aaronson's most devastating argument (he offers two more) is that
Tononi's \Phi or "integrated information" of an input-output system
(function), under a reasonable operationalization as a measure of how
correlated subsets of inputs are with subsets of outputs, is just not a good
measure of "consciousness" because, as it happens, large families of rather
mundane systems that one wouldn't think of as "conscious" in fact have _provably
large_ "integrated information" or \Phi by design --such as Reed-Solomon
codes, for instance.
Put another way, if Tononi were right then your (say) portable DVD player
ought to be "conscious" because, surprisingly, the amount of "integrated
information" achieved by the scratch- and skip-tolerant error-correcting code
it uses internally has to be _huge_ in order for it to work. But that is
_prima facie_ ridiculous; so either we are wrong and the DVD player is in fact
"conscious" or, more likely, Tononi's proposed definition of consciousness is
lacking.
Again as I said Aaronson makes two other points, but I think this one alone is
the most conclusive.
~~~
logicallee
It's not _prima facie_ ridiculous that the DVD player might be conscious.
Suppose it _were_ prima facie ridiculous - then it would be prima facie
ridiculous in 1969 with a tape player, in 1979 with a VCR, in 1989 with a CD-
ROM drive, in 1999 with a DVD-ROM drive, in 2009 with a Blu-ray drive, in 2019
with x, in 2029 with y, and there is no reason it shouldn't still be prima
facie ridiculous even if at some point along the way the thing happens to be
conscious as a side-effect, without this being necessarily visible in its
outputs. So you can't just call it prima facie ridiculous and be done with it
- you need some other argument.
------
andyjohnson0
I wanted to like this, but IIT just seems like a lot of hand-waving.
_" to hypothesize that a physical system is “conscious” if and only if it has
a large value of Φ"_
By this measure, would a long mathematical proof or an orchestral symphony be
conscious? If so, how does this actually help us understand how subjectivity
relates (or doesn't) to all this?
[1]
[http://en.wikipedia.org/wiki/List_of_long_proofs](http://en.wikipedia.org/wiki/List_of_long_proofs)
Windows Timer Resolution: Megawatts Wasted - ceeK
http://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/
======
gioele
In the meantime, Linux has (by default) an adaptable timer and will soon be
fully tickless [1]. In other words there will be no fixed timer and the OS
will calculate when the next wake-up should be scheduled and sleep until that
time (or until an interrupt comes).
At the same time, PowerTOP [2] will nicely show you which programs or drivers
are responsible for waking up the computer and estimate how much power each
program is consuming.
[1] [https://lwn.net/Articles/549580/](https://lwn.net/Articles/549580/) [2]
[https://01.org/powertop/](https://01.org/powertop/)
~~~
kryten
Yet Linux 3.6 still lasts only 1h15m on idle compared to windows 7 at 4h45m on
my laptop. Linux has all powertop optimisations on as well.
YMMV
~~~
mikevm
What explains the abysmal power efficiency of Linux?
~~~
caf
On consumer hardware like laptops, it is generally that the driver for the GPU
is not able to put the hardware into its low power modes.
The power efficiency tends to be very good on server-class hardware (because
large corporate users like Google tend to care a lot about it).
~~~
aschampion
AMD has submitted dynamic power management for the 3.11 kernel, so hopefully
this efficiency gap will start closing:
[http://www.phoronix.com/scan.php?page=news_item&px=MTQwNTU](http://www.phoronix.com/scan.php?page=news_item&px=MTQwNTU)
------
jcampbell1
This is an interesting post. jQuery was fixed to use 13ms as the minimum
animation interval some time ago. This seems like a legit Chrome bug to file
as the interval should be more deterministic. Chrome shouldn't take a 1ms tick
unless it really needs it.
I wonder how much javascript code uses setTimeout(x,0) to push code to the end
of the run loop.
~~~
evmar
Initially, Chrome attempted to allow setTimeout()s under the 15ms or so that
was standard across browsers, which led to it winning some benchmarks and some
accusations of foul play. The intent was pure -- why artificially clamp
JavaScript timers to a Windows quirk? -- but eventually Chrome was changed to
make timers behave like in other browsers. It appears that the spec now says
4ms is the minimum.
This bug (helpfully linked from MDN) has more of the story.
[https://code.google.com/p/chromium/issues/detail?id=792](https://code.google.com/p/chromium/issues/detail?id=792)
I remember the Chrome timer code of years ago was careful to only adjust the
interval when needed. From reading other bugs it looks like today's behavior
is an accidental regression and will likely be fixed (until the next time it
regresses).
~~~
ddeck
> I remember the Chrome timer code of years ago was careful to only adjust the
> interval when needed. From reading other bugs it looks like today's behavior
> is an accidental regression and will likely be fixed (until the next time it
> regresses).
Indeed, although it seems the current behavior has been oustanding for some
time:
[https://code.google.com/p/chromium/issues/detail?id=153139](https://code.google.com/p/chromium/issues/detail?id=153139)
The original justification for lowering the resolution is an interesting read:
_At one point during our development, we were about to give up on using the
high resolution timers, because they just seemed too scary. But then we
discovered something. Using WinDbg to monitor Chrome, we discovered that every
major multi-media browser plugin was already using this API. And this included
Flash, Windows Media Player, and even QuickTime. Once we discovered this, we
stopped worrying about Chrome 's use of the API. After all – what percentage
of the time is Flash open when your browser is open? I don't have an exact
number, but it's a lot. And since this API effects the system globally, most
browsers are already running in this mode._[1]
[1] [http://www.belshe.com/2010/06/04/chrome-cranking-up-the-
cloc...](http://www.belshe.com/2010/06/04/chrome-cranking-up-the-clock/)
------
tfigment
Interesting. I had clockres on my machine but never bothered to learn what it
does. I've used that in code that I wanted a better timer but ended up using
QueryPerformanceCounter/Frequency and rolling my own timer class but that can
be a bigger pain than just using the timer.
On my machine, I got similar settings and found chrome being the sole offender
which is probably the worst offender in many ways. Firefox and IE were clean
so Google is the outlier and given I always have Chrome browser open somewhere
while SQL or devenv is not always open, I suppose that's suboptimal wonder if
they will change it.
------
br1
Macs have a similar issue. Unexpected programs activate the dedicated GPU.
Skype and twitter used to do this. Power users run special utilities to force
the dedicated GPU off, but the normal user has no idea that his Mac battery
won't last.
In my PC the programs raising the resolution are gtalk, chrome and skype. I
run visual studio and sql server but they don't show up in powercfg.
quartz.dll is from DirectShow. A multimedia component is expected to require
more resolution. The fault is in the program calling into DirectShow.
~~~
jfb
Only if you have a discrete GPU, of course. The power management story on the
15" MBP is a bit of a shitshow; I expect that the Haswell updates will go
integrated-only.
------
dfc
_" Another common culprit on my machine is sqlservr.exe. I think this was
installed by Visual Studio but I’m not sure. I’m not sure if it is being used
or not."_
Is this attitude still prevalent in the windows community? I thought things
had improved on that front.
It's worth pointing out that "the highest frequency wins" is not an example of
"tragedy of the commons."
~~~
overgard
"the windows community". Huh. Do you really think such a thing exists?
(I'm just kidding; of course it does. We meet monthly and talk about ways to
snuff out free software, as is our way).
~~~
rictic
Um, what? Of course there's a windows community, in every sense of the word.
There are magazines, conferences, and forums for windows programming and
windows programmers. There are trends, fads, and innovations.
... and there are practices that are commonly found in windows programming that
are beyond the pale in other environments (loading your app into memory every
time the machine starts, installing malware during setup, etc).
------
solox3
Like the author says, a megawatt is a measurement of energy wasted per second.
If we take his claim that 10 MW is being wasted, then the energy wasted at
10 MJ·s⁻¹ over a year is the energy of 5 small atomic bombs.
[http://www.wolframalpha.com/input/?i=10MW+times+a+year](http://www.wolframalpha.com/input/?i=10MW+times+a+year)
~~~
caf
I don't think "5 small atomic bombs per year" is a particularly relatable
example. It's about the average electricity consumption of 7500 US homes -
that seems more concrete to me. If you can save the equivalent of switching
off 7500 homes by fixing a bug in your software, that's a pretty big impact
for one person to make.
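A quick back-of-the-envelope check of both comparisons in this subthread (the
bomb-yield and per-household figures below are rough assumptions of mine, not
numbers from the thread):

    var wastedWatts = 10e6;                            // 10 MW
    var secondsPerYear = 365 * 24 * 3600;              // 31,536,000 s
    var joulesPerYear = wastedWatts * secondsPerYear;  // ~3.15e14 J
    // "small atomic bomb" taken as ~15 kt of TNT (~6.3e13 J) -- assumed
    console.log(joulesPerYear / 6.3e13);               // ~5 bombs
    // average US household taken as ~11,000 kWh/year -- also assumed
    var kWhPerYear = wastedWatts / 1000 * (365 * 24);  // 87,600,000 kWh
    console.log(kWhPerYear / 11000);                   // ~8,000 homes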
~~~
6d0debc071
Considering the amount of energy we use, it's barely a drop in the ocean.
~~~
pepve
A drop with a radius of 5.7 km...
[http://www.wolframalpha.com/input/?i=10MW+times+a+year+divid...](http://www.wolframalpha.com/input/?i=10MW+times+a+year+divided+by+143.851+PWh+times+1.3+billion+cubic+kilometres)
~~~
6d0debc071
I'm not sure whether you're agreeing with me or not, and in a way that's why
people shouldn't just use large numbers and then act like they've said
something meaningful.
Ratios matter, large numbers without a relevant basis for comparison on the
other hand are just misleading. There are roughly 116 million households in
the US, saving the energy of 7,500 is not a big change.
You're solving roughly 1/15,466th of the problem. And that's assuming that
all the savings could even be applied to the US, which they most certainly
couldn't.
That's not a big impact. Chances are no-one - even if they were looking -
could notice the figurative needle move on a change that small at all the
power-stations serving the aggregated demand.
------
rfatnabayeff
Author provides source code using non-monospace (!) slanted (!!!) font. I
would have wasted a damn gigawatt to unsee that.
~~~
noptic
    if (comment.onTopic) {
      comment.post()
    } else {
      comment.delete()
    }
|
{
"pile_set_name": "HackerNews"
}
|
Microsoft lifts GPL code, uses in Microsoft Store tool - vijaydev
http://www.withinwindows.com/2009/11/06/microsoft-lifts-gpl-code-uses-in-microsoft-store-tool/
======
ramchip
The quality of the comments on this blog is terrible.
That being said... the GPL violation is not yet 100% clear. The author
suspects it from having decompiled the Microsoft software, but the code is
apparently ported from another LGPL tool, so either Microsoft took it from the
LGPL tool and ported it (ending up nearly identical to the GPL code) or they
took the GPL code itself.
|
{
"pile_set_name": "HackerNews"
}
|
Virgin Galactic Unveils Design For SpaceShipTwo - jmorin007
http://www.techcrunch.com/2008/01/23/virgin-galactic-unveils-design-for-spaceshiptwo/
======
danw
_the ability to launch low-earth satellites that could literally take some of
the heat out of the planet, by serving as a repository for information
technology._
Reminds me of a very hypothetical idea I heard about a few years ago. Some
hackers wanted to create a hybrid communications satellite/web server that
could host content outside of any legal jurisdiction. Could you imagine the
RIAA trying to shut down that BitTorrent Tracker?
~~~
kirubakaran
While this is great for free speech etc, how can something that is out of
legal jurisdictions be defended? What if RIAA decides to point a high power
laser at it?
~~~
dcurtis
The RIAA has high powered lasers? I somehow doubt that line item would be
approved by the member companies.
~~~
kirubakaran
You are taking my example rather literally :-) My original question stands:
How do you defend something that is not protected by governments?
~~~
danw
Quite simply, you can't. You only get the benefits of no law and government
alongside the downsides of no infrastructure and protection.
I guess with your laser example other satellite owners, including governments
would be peeved due to the debris. This would likely lead to stricter rules
about space being drawn up.
~~~
daniel-cussen
Governments might start claiming space like they claimed coasts in the 50s.
------
hwork
It's interesting how much more captivating this is compared to anything NASA
is doing currently. The rovers are cool, the fly-bys neat, hubble upgrades,
etc. But this (and similar ventures) put real, albeit rich and crazy, people
into space and that's so... cool. It's tangible.
|
{
"pile_set_name": "HackerNews"
}
|
CNN: Online comments are on the way out - cft
http://www.cnn.com/2014/11/21/tech/web/online-comment-sections/
======
piwakawaka
Is this true? Are comments done for? Do they desire them in theory, if trolls
could be excluded? Can trolls be excluded?
"At CNN, comments on most stories were disabled in August. They are
selectively activated on stories that editors feel have the potential for
high-quality debate -- and when writers and editors can actively participate
in and moderate those conversations."
What happened in August?
"Editors and moderators now regularly host discussions on CNN's Facebook and
Twitter accounts."
Why are they not concerned about trolling on those sites? Do they just want it
siloed from their site? Is this about lawsuits or advertiser's concerns?
"Despite our best efforts to contain them, trolls are a persistent group and
keep managing to slip through the gates."
Was this simply a moderation problem? Is this solvable?
|
{
"pile_set_name": "HackerNews"
}
|
Guest Posts and link-building for B2B SaaS - cscane
http://www.synergy4saas.com
======
cscane
Comments or feedback? We're accepting beta users
|
{
"pile_set_name": "HackerNews"
}
|
Adobe Forging Ahead with Flash for the iPhone Despite Jobs’ Remarks - ciscoriordan
http://www.techcrunch.com/2008/03/18/adobe-forging-ahead-with-flash-for-the-iphone-despite-jobs-remarks/
======
bilbo0s
This is a mistake.
Firstly, if Flash on the iPhone is buggy, because Apple withheld its help,
users will blame Flash and word will go around to keep it off of your iPhone.
Secondly, if Flash on the iPhone is less capable than the desktop version, and
more capable than Flash Lite, developers will become irritated. Finally, if
people are making full 3D shooters and racers with the regular iPhone SDK, AND
making money off of it through the iPhone store, Flash developers will be in a
position of staggering disadvantage. This is because even if Jobs lets Adobe
make and distribute a Flash runtime, which is far from certain, he will
certainly not let Flash developers put their apps on the app store. Even if he
did, who would buy a Flash app? In users' minds, Flash apps should be free,
remember. And once users make up their minds . . . well . . . talk to the
record industry.
If you think advertising will underpin it all, you should be paying closer
attention.
Flash has very little to gain, and a great deal to lose in user perception, as
well as developer perception. If Adobe chooses to go down this road, they MUST
execute flawlessly.
|
{
"pile_set_name": "HackerNews"
}
|
Why Hypercard Had to Die (2011) - tobr
http://www.loper-os.org/?p=568
======
edtechdev
At the bottom of the comments, a member of the Hypercard team said it was
killed in 92, before Jobs returned.
There still is no free development tool out there that matches what Hypercard
did in terms of power and ease of use. We're still coding with languages
designed for constraints that disappeared decades ago.
~~~
jayd16
I've never used Hypercard and I just can't fathom the love for it.
If nothing matches Hypercard, why don't people use it (or some opensource
copy) of it? So many seem enthralled by it but the reality is no one seems to
want to bring it back.
~~~
danaris
Well, the main reason no one uses Hypercard itself is because emulating
classic MacOS is a pain and a half.
I haven't explored all the copies, but the ones I have seen tend to be less
featureful and/or less user-friendly than the original.
Furthermore, probably the best modern software that's similar to Hypercard in
its overall flexibility and power is...the web. Back in the day, Hypercard was
_it_ ; there wasn't really anything else in that space. Nowadays, there's
still nothing that's quite as awesome or matches all its features, but there
_are_ things that are much closer and do a lot of the same things that are
much more accessible than emulating or reimplementing Hypercard.
~~~
cortesoft
Emulating Mac OS classic isn't particularly difficult these days. I use
sheepshaver with great success. I actually DO use it to run my old hypercard
games I wrote in elementary school.
~~~
danaris
Last time I tried using Basilisk II, I managed to get it to be stable (running
System 7 in an emulated...IIsi, I think?), but if I ever tried to change
_anything_ beyond adding and removing disks, it just wouldn't run.
I haven't tried SheepShaver much, I admit, because most of the stuff I want to
run was written for 68k and runs poorly or not at all on PPC.
------
asciilifeform
Author of linked piece speaking. Seems like many readers continue to miss the
essential point, just as they did in 2011:
Hypercard wasn't a gem of graphic design (1-bit colour, plain line graphics)
or of programming language design (one of the many laughable attempts at
"natural language programming") or of high-performance number crunching... but
it was _simple_. I.e., the entire system was _fully covered_ by ~100 pages of
printed manual. It _fit in one 's head_. (And did not take 20 years to fit-in-
head, either, an intelligent child could become "master of all he surveys" in
a week or two.)
Where is the _printed_ manual for the current MS VB, or the WWW's
HTML/JS/CSS/etc stack (yes including _all_ browser warts), or for any of the
other proposed "replacements" ? How many trees would have to be killed to
print such a manual, and would it fit in your house? Could you read it cover
to cover, or would you die of old age first?
~~~
314
Do I recognize your name and writing style from Kuro5hin?
~~~
asciilifeform
Doubt it: I have not written at Kuro5hin. (Possibly linked there at some
point, however?)
------
RodgerTheGreat
When Hypercard was new, you could use it to create user interfaces that
_looked_ and _felt_ exactly like all the other Macintosh applications of the
era- within a reasonable margin, anyway.
Basic expectations are now higher. UIs are more visually complex and subtle.
You can't just freehand them with a rectangle tool and a few canned textures.
UIs are expected to reflow to devices with different screen sizes and dot
pitches. Users expect their programs to give them up-to-the-minute information
from the internet, with slick animation and nuanced typography. With
flexibility comes inescapable complexity.
Could you make a tool as easy to use as Hypercard, and let modern users create
useful applications for themselves? Sure, but the results wouldn't be nearly
as nice as the applications built with conventional tools that live alongside
them. I have a tremendous fondness for Hypercard, but I don't think you could
ever make something today which was empowering and full-featured like
Hypercard was for _its_ day.
~~~
justinator
_You can 't just freehand them with a rectangle tool and a few canned
textures._
I mean, why not? If I'm a small business, and I need a custom (in-house) app,
AND I could do it myself with little training, I'd rather do that than not
have the business. (Again) how many businesses run on Excel Spreadsheets? Not
every business can hire a programming firm to make a boutique app that's used
by 100 people.
But I'm also one of those, "Why did FileMaker Pro die?" kinda person. And I
still write Perl. I'm a weirdo, but I'm a pragmatic weirdo that likes to get
stuff done.
~~~
scroot
I do a lot of freelance and I can't tell you how many businesses I've worked
with who had some need that was just a smidge more than their current software
would allow. If there was something in between using an interface and learning
a full-fledged programming stack from first principles, it would go a long
way.
Part of this goes towards getting people -- both today's mainstream developers
and users -- to think hard about what using a personal computer should mean.
I can't think of a better definition of programming than "telling a computer
what to do." That should be the first hint.
~~~
TeMPOraL
That's part of the reason why Emacs has this kind of cult following. There's
always something you need that's "just a smidge more" than your Emacs can do,
but it's _very_ easy to just code up that missing bit. It might be not for
casual user (requires learning some basics of programming), but then again,
the legend goes that secretaries used to use Emacs and like it, for similar
reasons people like Excel today - they need something extra, they can make it
themselves on the spot.
~~~
eitland
And another thing people should learn from that, never underestimate
secretaries, accountants etc.
It is often good to limit what non admins can do to the system, servers, their
computer os etc.
We should also limit who has access to sensitive data.
But I think often companies could be more productive if they taught their
employees to script a bit.
~~~
AnIdiotOnTheNet
> And another thing people should learn from that, never underestimate
> secretaries, accountants etc.
It is difficult for the modern developer to think of users as anything other
than dumb cattle, because doing so would raise ethical concerns about their
high-engagement eyeball-farming products.
------
II2II
There were many reasons why Hypercard died:
Hypercard was very much a product of its time. Software was far less complex
so it was much easier to create a general purpose development tool for end
users. The author argues that attempts to recreate Hypercard try to do too
much, that is because users expect more of software today. To give you an idea
of what I mean: I used Hypercard to create a cookbook for a family member in
the mid-1990's. You could enter recipes, search for recipes, and even export
them to HTML. While this would have been an amazing accomplishment a few years
earlier, the program was a relic of a bygone era the moment that it was
written. Adding useful features, such as uploading those HTML files to a web
server, may have been possible but would have required extensions to the
language. Plenty of extensions existed, which is how companies like Cyan
managed to produce an amazing-for-the-time multimedia game (Myst) on what most
people viewed as a stack of programmable black and white index cards. Yet
extending Hypercard to reflect the technology of the 1990's would have
transformed a product for everyone to an incomprehensible mass for anyone
aside from developers. And the 1990's were primitive compared to the 2010's,
never mind the coming 2020's.
In a similar vein, people's interests changed. The mid-1990's brought the web,
so people were far more interested in developing for the web. The early days
of web development were quite accessible, was focused upon content (much as
Hypercard was) and allowed people to embed programs within that content (much
as Hypercard did). While Hypercard may have been better for some things and
certainly provided a better development environment, it was also obsolete.
As much as I loved Hypercard, the reality is that it was neglected rather than
buried. Its longevity could have been improved without increasing complexity,
such as adding colour or allowing multimedia files to be embedded (without
resorting to third-party extensions). On the other hand, it would have died
off eventually. The trajectory of web development shows how a once simple and
accessible platform can become so complex that it takes a dedicated student to
learn.
~~~
scroot
Another thing to think about is the metaphors used. Stacks, cards, and
interactive objects on cards were really good metaphors that fit holistically
in the Apple personal computing systems of the late 80s/90s. You're right that
a lot of the computing environment has changed (eg the Internet is like a
natural resource and should be assumed). Whether it is needfully more complex
is another matter.
A key question for designers today is this: what are the metaphors that would
work for today's computing environments in a similarly holistic way, allowing
users to become "authors"? This is something different than a modern Hypercard
clone. I think when people want that old "Hypercard feeling," what they really
want is this holistic nature, power, and ease of use for regular users -- not
simple clones of the old thing.
------
dantondwa
When I was a kid, at the beginning of the 00s, and I was beginning to
experiment with computers, I imagined the future of computing to be about
reducing the complexity of software development as much as possible. I now
realize that I dreamt of something exactly like HyperCards. A world where
machines could be programmed using something resembling natural language and
where anyone could create and share software. At the time, the closest thing I
could find was a Word document where I dragged and dropped buttons to create a
pseudo UI and then imagine that it worked. I wish I had access to something
like this.
I believe that by not having a tool like this, we are taking away from the
majority of the population, and in particular the young, the possibility of
shaping computers for their own needs. A decision has been taken: the user has
to be a passive consumer. It's a mistake and I am really sad about it. It
feels, to me, as if there was a decision, intentional or not, of keeping the
creation of software as a privilege of a few and not a right of everyone.
~~~
brazzy
> A world where machines could be programmed using something resembling
> natural language and where anyone could create and share software.
This is a silly pipe dream almost as old as computing, which reappears every
couple of years in a new guise. When COBOL came out, it was promoted with
exactly that promise, and there were people who seriously claimed that within
a few years there would be no more professional programmers. More recent
incarnations of the pipe dream were "fifth-generation programming languages"
and "model-driven development".
In reality, what keeps "the creation of software as a privilege of a few" is
not a lack of the right tool, but the lack of the _mindset_ of producing
unambiguous, complete instructions and developing your requirements to a state
where it's possible to give such instructions, and facing the many mistakes
you end up making on the way.
That is actually something very difficult to do, and which Average Joe will
_never_ learn, no matter what tools you give him.
~~~
tabtab
A more practical goal is to program in a language or API close to the domain.
------
slowmovintarget
Amid the ranting, the one thought-provoking statement was this:
> The reason for this is that HyperCard is an echo of a different world. One
> where the distinction between the “use” and “programming” of a computer has
> been weakened and awaits near-total erasure. A world where the personal
> computer is a mind-amplifier, and not merely an expensive video telephone. A
> world in which Apple’s walled garden aesthetic has no place.
HyperCard made Macs function as personal computers (the original notion behind
the term) and not as digital appliances (video editors, gaming rigs, media
boxes... etc.)
~~~
eridius
All programming environments do that. And Apple continues to this day to give
away a free programming environment for your Mac.
~~~
slowmovintarget
All programming environments do that for programmers. Not all do that for
people who aren't software developers.
Case in point: Some of the finance people I work with build incredible
spreadsheets. Formulas upon formulas that turn their special spreadsheet into
what is really an application. They're programming, but that isn't how they
think about it (I've asked). They're just Excel power users.
HyperCard aficionados were the same way. They were programming with the
assumption that they just knew HyperCard really well.
You wouldn't be able to sit Douglas Adams (were he alive) in front of XCode
and get the equivalent of his HyperCard stack out:
[https://archive.org/details/DouglasAdamsMegapode](https://archive.org/details/DouglasAdamsMegapode)
The reason is that it requires a dramatically higher investment to create in
XCode. "Just learn Swift" doesn't really cut it as an alternative.
~~~
trop
As an example, a decade ago I taught art students how to code. They had
Macintoshes. I had them download Emacs then taught them Emacs Lisp. As insane
as that sounds, it was interactive, easy to install, and interesting. They
were able to write programs in the REPL after the first class. I wouldn't have
dared to have put them in front of XCode or some equivalent IDE.
~~~
mycall
Why didn't you use MaxMSP or Processing?
~~~
trop
A fair question. An easy answer is that I thought Lisp was pretty cool, and
that the students were capable of it. And, pragmatically, I was experienced
with it, unlike MaxMSP or Processing.
But there was something philosophical: I liked the idea of introducing them to
the feel of fundamental computer science, rather than a nice layer over
computers for artists. (And what artists actually use those nice layers?) I
wanted them to experience an open source ecosystem (unlike Max), and I felt
that the Java base of Processing was off aesthetically -- too many layers over
the machine.
Thinking about the OP, I do wonder what it would have been like to teach
coding-for-artists with Hypercard. One thing I love about Emacs Lisp is that
it is only grudgingly visual. That it returns computers to something which
sends output to a TTY. It gives people who have grown up with a GUI a vision
of what computers were like before. I like that Emacs (and Lisp) are things
with a history and touch on earlier eras of computer culture.
------
api
It's a special case of a more general trend: the complete abandonment of what
were once called RAD (rapid application development) tools.
Microsoft also killed WYSIWYG Visual Basic (pre-dotnet), quite possibly the
most productive GUI builder ever made. Not quite as clean as hypercard, but
another example of that bygone era when the user was the customer and not the
product.
The only remaining representative of that era is the spreadsheet, and it seems
pretty solidly locked in. Still I do see a push to replace spreadsheets with
opaque SaaS tools that do specific things.
~~~
slowmovintarget
My first real software dev job was with Visual Basic 3.0, then Delphi 1.0.
This World Wide Web thing was about to happen, but hadn't quite yet.
Compuserve was where the cool kids hung out, and I ran a WildCat BBS with a
buddy out of his basement.
RAD was rad, for a while.
~~~
mycall
WWW was in effect for many years before 1995, although that was its take off
year.
------
redleggedfrog
His acknowledgement of the elephant in the room ("... shit soup of
HTML/Javascript/CSS...") is prescient for 2011. Today it's even worse. How
many versions of lipstick on pig are we to endure (Backbone, Angular, React,
Vue..., Bootstrap, less, sass...) before someone actually makes a sane UI
development toolkit?
Think of the children.
~~~
quickthrower2
The problem is the web was designed for documents not apps. And JS wasn’t
designed for what it does now. The closest thing we had to a sane development
toolkit was Adobe Flex. Someone came up with that and it was ok. But other
more powerful forces particularly Apple made it disappear. Google is in charge
now maybe it’s up to them.
~~~
devnulloverflow
But a lot of the web is in fact serving documents, and using complicated apps
to do it.
It's not surprising, as the two categories do really merge into one another. A
web store is certainly an app -- but it's also the modern equivalent of an
store catalogue, which is a document.
------
TheDong
The author seems to attribute malice to the death of Hypercard, something akin
to "programmers only stand to lose value if users can help themselves".
I think a far simpler explanation is not that control to the ability to
program has been intentionally deprived from users, but rather that the vast
majority of users have no wish to have such a tool, and that Steve Jobs
recognized this.
I think that some proof of this exists: Excel, Visual Basic (and its
formbuilder), Applescript, Autohotkey, etc all exist.
Those tools are all intentionally approachable, as was hypercard, and yet the
vast majority of iphone users would not wish to deal with those tools or
ecosystems. They would much prefer a company do something called "software
development" and give them a working specialized application.
It seems more likely to me that Hypercard was eventually seen as a dead-end,
not as a threat to the existence of selling apple software. Sure, you could
spend significant effort on building a tool that is powerful for novice users
(letting beginners create basic automation and forms easily), but less
powerful for experts (providing less powerful design, abstraction, and
programming features than more typical programming languages).
~~~
Semiapies
It was apparently gone before Jobs got back, but the same point still stands.
The people who put together accounting systems and the like in Hypercard were
unusual in being willing (and having the time) to put forth the effort of
building anything complex. They're almost certainly the same people who would
have put something together in GW-Basic, because they needed what they were
building.
I'm a professional programmer, and I've got enough in my plate to be _happy_
to let clients build some portion of things. But they almost entirely _won
't_, even when given software designed to let them do just that. I've never
encountered any sort of form-builder or report-generator or whatever that was
designed to work for clients "without needing a programmer" where clients
didn't end up asking programmers to do all the form-building, report-
designing, anyway.
It's like they have jobs of their _own_ to do or something. It's like division
of labor _makes sense_ and is inevitable in any non-tiny organisation.
Back when everything was on paper, a technology virtually _everyone_ could
access by reading and writing (or typing), lots of people still had
specialized jobs based around creating and organizing documents within
organizations. Hell, most people are perfectly able to clean and vacuum, but
beyond a certain size of 9-to-5 organisation, places hire janitors.
~~~
TeMPOraL
> _But they almost entirely won 't, even when given software designed to let
> them do just that. I've never encountered any sort of form-builder or
> report-generator or whatever that was designed to work for clients "without
> needing a programmer" where clients didn't end up asking programmers to do
> all the form-building, report-designing, anyway._
Did you interface with the individual workers, or company as a whole? My
experience is that usually, the software _doesn 't_ let workers do what they
need, the workers end up working around it with Excel, and at some point
information will trickle down through two departments and three layers of
managers that some piece of software they're paying for needs adjustment.
> _It 's like they have jobs of their own to do or something. It's like
> division of labor makes sense and is inevitable in any non-tiny
> organisation._
Exactly. But their jobs are almost never correctly captured by software, so
people get creative and invent their own workarounds. The unending popularity
of Excel is a great example of this. Asking someone else to fix your software
for you has large communications cost and time delay; asking programmers has
also a large monetary cost.
~~~
Semiapies
_My experience is that_
And mine isn't. Among some of our clients, being able to do anything in Excel
is an unusual skill. Among others, they do nothing more than basic spreadsheet
work, using nothing more advanced than cross-sheet references. In twenty years
of this job, I haven't run into _any_ of the fabled Excel spreadsheet apps.
I'm sure they're out there, but I'm also sure there's plenty of the world that
doesn't use them.
------
gdubs
Owning a Mac was a very self-selective thing back in the day, and at the time
the “bicycle for the mind” ethos was a deep part of Apple’s marketing.
HyperCard fit that perfectly. So, if you had the money and inclination to buy
a Mac, you were also buying into a philosophy and a promise. That meant that
regular people, not professional developers, were engaged and interested in
making the most of their (expensive) machines. HyperCard was simple to use,
and the return on investment in learning how to build with it was immediately
clear.
Imagine you’re a botanist with the spare money to buy a Mac. You could spend a
few hours with HyperCard and put together a beautiful plant database, full of
pictures and hyperlinks. I love those kinds of stories from the HyperCard era,
and it’s a vision of the power of computing that still resonates today, and is
a big reason why stories on HyperCard are perennially popular.
In today’s smartphone era we have millions of pre-made apps ready to consume
and use. It’s amazing in many ways, but if you think Xcode, or JavaScript and
HTML are the natural successors to HyperCard than you don’t really understand
what was so magical about HyperCard.
------
soapdog
Hey, I took some time to write a reply to that post showing a modern day
alternative:
[https://andregarzia.com/2019/07/livecode-is-a-modern-day-
hyp...](https://andregarzia.com/2019/07/livecode-is-a-modern-day-
hypercard.html)
I posted another comment linking to a modern day HyperCard that was downvoted
without any explanation? I don't understand why it happened. Can someone tell
me?
------
jakobegger
Hypercard was a very general purpose app, but it was a bit of a "jack of all
trades, master of none". I loved Hypercard, but it was never really an actual
good solution for any of the things you may want to do with it.
1) Lots of people tried to use it for making games, but anyone who tried
quickly ran into its limitations. There were a handful of neat Hypercard
games, but when I tried to see how they were made, I saw that they made heavy
use of XCMDs and XFCNs -- custom extensions you had to write in a compiled
language (Pascal? C?). At that point, why would you bother with Hypercard at
all?
2) Another common use case were simple databases, like an address book, or a
stack for all your music cassettes, or something like that. But it lacked
search tools and a way to work with bulk data, so it wasn't really a good
solution for databases. Filemaker was much better for databases.
3) Or you could use it for simple specialized calculators, like a currency
converter. But it was a lot of work to set it all up correctly, and Excel was
actually a lot more useful for these kinds of tasks.
~~~
scroot
> I saw that they made heavy use of XCMDs and XFCNs -- custom extensions you
> had to write in a compiled language (Pascal? C?). At that point, why would
> you bother with Hypercard at all?
Hey, don't forget about shareware culture! You didn't need to code up these
XFCN/XCMDs yourself. You could try and buy them! There was a whole ecosystem
of developers making these things (WindowMaker anyone?) and it worked pretty
well for what it was.
------
vincent-toups
May as well plug a similar article I wrote like 10 years ago now:
[https://procyonic.org/blog/duckspeak-vs-
smalltalk/](https://procyonic.org/blog/duckspeak-vs-smalltalk/)
I actually passed a few emails back and forth with Alan Kay for this.
I short version of this piece appeared as a letter to the editor in the New
Yorker, too.
~~~
carapace
> The certificate for procyonic.org expired on Thursday, June 6, 2019.
> Error code: SEC_ERROR_EXPIRED_CERTIFICATE
~~~
vincent-toups
Oops! Thanks for the heads up!
------
jancsika
1\. What's the equivalent of HTML5 canvas in Hypercard?
2\. Suppose there's a little dataflow drawing program somebody else wrote in
Hypercard. I want to add a binding for my own transient card to pop up on
keyboard shortcut, gain focus, steal all input events, then go away on another
keyboard shortcut (and return input events to their original bindings). How do
I do that?
3\. Suppose Hypercard flourished into the 90s and early 2000s. How would this
alternate-reality Hypercard have dealt with the pop-up ad problem?
I'm sure there are answers to these questions. However, a theoretical
Hypercard that solves them-- plus all the other myriad problems that come from
existing on a weaponized internet-- would make this tech much closer to the
complexity of the modern web than to the original Hypercard.
~~~
wolfgang42
For your #2:
HyperCard is primarily mouse-oriented; I don't believe there's a way to add
keyboard shortcuts (without writing an XCMD in another language like Pascal).
Instead you'd add a button into the other stack (probably on the background)
to 'go to "UtilityStack"'.
This would take you to the first card of the stack you'd created, which
naturally takes over since it's the one being shown (no need to explicitly
'steal' events). Once you're done there, either use Go > Back (Cmd+B, I
believe) or have a button on the card with the script 'go back'. This takes
you to the previous card the same way that the back button in your browser
would, and likewise events now go to that card with no explicit rebinding
required.
~~~
scarface74
I think you could add keyboard shortcuts. You could definitely add menu items
([http://folkstream.com/muse/teachhc/menu/menu.html](http://folkstream.com/muse/teachhc/menu/menu.html))
and you could intercept keydown events at the stack level.
~~~
wolfgang42
Huh, interesting. They must have added that in a later version than I was
using, my copy of "The Complete HyperCard Handbook" doesn't mention "create
menu" or keyDown handlers at all. Thanks for the correction!
------
lioeters
Another article that pops up regularly on HN, on this relevant topic:
The coming war on general-purpose computing
(previous discussion:
[https://news.ycombinator.com/item?id=19872364](https://news.ycombinator.com/item?id=19872364))
There are a number of admirable attempts to create the "next Hypercard" on the
web. I haven't encountered one that really sticks, but as an old(er) timer who
remembers the earlier era of personal computers - where the user was really
the user, not "used" \- I see that it's an important dream to keep alive,
especially in our current cultural context of an increasingly exploitative
web.
------
japanoise
Loper-os seems to suffer from the same problems as a lot of "unix hater"/lisp
weenie types: while he does point out real flaws in the computing of today, he
seriously lacks in a competent, coherent, non-vaporware alternative. His blog
is now nothing but bloviating about bitcoin
~~~
asciilifeform
Author of linked site speaking. Are you sure you are reading the same page? I
had half a dozen articles re Bitcoin, the last in 2014...
~~~
japanoise
Your most recent article, [http://www.loper-os.org/?p=3440](http://www.loper-
os.org/?p=3440) is tagged bitcoin, and in fact if I click on the tag, I see an
awful lot of articles written in the last few months [http://www.loper-
os.org/?cat=42](http://www.loper-os.org/?cat=42)
~~~
asciilifeform
I do have a multi-chapter series on constant-time integer arithmetic -- indeed
tagged incl. "bitcoin", as it is meant to be used in (among other places) a
rework of the client.
You evidently spent at most 3 seconds reading the link; wouldn't kill you to
spend 5 seconds and see what is behind the tag.
------
_bxg1
This is a pretty incendiary and bitterness-laden rant. I enjoyed the trip back
in time but the last paragraph, for me, pretty much nullified any actual
points the author was trying to make.
~~~
braythwayt
I empathize with your feelings, but objectively, if an author makes three
good points and then pours toxic emotional sewage all over them, the good
points are still the good points.
We may not want to read the author espousing them any further, who needs toxic
sewage? But the sewage cannot nullify a point that we recognize as true. Its
truth is independent of whether the author is a good or bad person. Its truth
is independent of whether the author argues it well or poorly.
It's unfortunate when good points get covered in toxic emotions. I wouldn't
say that we should read such things whether we like it or not, but having read
them, I say we take the good points and repackage them in less vitriolic
prose.
~~~
_bxg1
It undermines his judgement. He makes sweeping, subjective arguments, which
isn't inherently a bad thing but it means those arguments are more than just
cold facts. The amount of heated emotion he apparently has around this subject
means that his ability to draw conclusions from "soft" information is
compromised.
------
dang
Discussed at the time:
[https://news.ycombinator.com/item?id=3293657](https://news.ycombinator.com/item?id=3293657).
------
mgbmtl
I also had an early accidental contact with HyperCard, and sort of learned the
basics without knowing it. I was looking for something equivalent to show my
12 year old, who is into Minecraft and generally open minded about learning
geeky stuff (they've had Sketch and electronics basics in school, likes to
play with LEDs, 3d print, etc). I found Sketch a bit too limited; despite a
few attempts, it didn't work out for me.
I don't know much about game programming, but we ended up exploring Unity 3D.
There are tons of videos online, it works well under Linux, and you can do
lots of different things with limited programming. Probably not perfect, it's
just for playing around, but curious to hear other experiences.
~~~
PinkMilkshake
If Unity worked out well then Godot could be a good choice. It’s conceptually
simpler but is still very powerful.
~~~
follower
I had just signed in to suggest Godot myself. :)
For the GP, here's the Godot web site:
[https://godotengine.org/](https://godotengine.org/)
While Godot has its idiosyncrasies, in terms of being a tool that made it
possible to actually get something done and _playable_ , I've been pleased
with the results I've gotten from it so far.
The ability to export to HTML5 in addition to desktop & mobile is great for
getting a project in front of people quickly--particularly useful for things
like game jams: [https://rancidbacon.itch.io/sheet-em-
up](https://rancidbacon.itch.io/sheet-em-up) (which I created as my first game
jam entry :) ).
Godot is also released under a MIT license and has multiple full-time
developers working on it (supported in part by monthly community donations).
------
8bitsrule
Here's what Atkinson said about it in 2002:
[https://www.wired.com/2002/08/hypercard-what-could-have-
been...](https://www.wired.com/2002/08/hypercard-what-could-have-been/)
More clues in this (2016?) _Twit_ interview (including the phrase
'Sculleystink' at about 16m in):
[https://www.youtube.com/watch?v=INdByDjhClU](https://www.youtube.com/watch?v=INdByDjhClU)
~~~
Torwald
I am a fan of Leo, but he also tends to get on my nerves by constantly
interrupting people. Atkinson is great at just not letting that happen, FUN!
------
galaxyLogic
I think Web killed Hypercard. Hypercards were kind of components and that was
their power. Create one card at a time, then connect them together. But so
were/are web-pages.
But you could not connect a hyper-card to another card created by another
author executing on another machine. That was the mind-blowing concept of web
and it quickly became clear that that is what we wanted to do, not dabble with
our own stack of "hyper" cards.
For one thing, the web was not tied to Apple hardware.
------
nlawalker
Hypercard's "letting regular people build useful things" ethos lives on in
IFTTT, Zapier and Microsoft Flow. In 2019, it's not super useful to many
people to be able to mush together some buttons and textboxes into a desktop
UI, but it's fantastically useful to be able to mush together their email,
calendar, SMS, Instagram, Facebook, Twitter, Dropbox, Spotify, Alexa, home
automation etc.
~~~
TeMPOraL
Sorta, kinda, not quite.
That's the problem with SaaS - they take your data, silo it up, and expose
through highly limited and tightly controlled APIs. Sure, I can make IFTTT
save a copy of every picture I upload to Facebook into my Dropbox account (or
at least I could, I think it broke). But I can't make it save that picture _in
original size_ , because someone didn't expose some option, or an endpoint. Or
I can't make IFTTT save all the reactions to my post into a CSV in the format:
<<datetime, which post, who, what reaction>>, because such endpoints again
aren't exposed or integrated. Etc.
I get that IFTTT & folks are doing the best they can, but the companies
they're trying to "mush together" made their money on preventing
interoperability, so it's still very far from both what could be and what
_used to be_ possible.
~~~
alexis_read
You could try nodered-self hosted or cloud options. It has plugins for most
stuff including GPIO, MQTT, HTTP, email, twitter.
[https://nodered.org/](https://nodered.org/)
------
z3t4
Business and engineering rarely go hand in hand. Say you made something that
never wears out; the boss orders you to redo the formula so it has to be
replaced more often, resulting in more sales. Or you made something that
enables the user to create infinite amounts of what your company sells; the
boss tells you to erase all copies of it. Etc.
------
drngdds
"If you disagree with my barely-argued opinion, you're autistic" isn't a great
way to close an article
------
PeterStuer
Hypercard was obsoleted by the www. To survive it would have had to become a
visual HTML editor. If you are looking inside the Microsoft backcatalog it is
not VB that was the Hypercard heir, it was Frontpage for consumer-and Infopath
for business applications.
------
LeoPanthera
archive.org now has the ability to run uploaded Hypercard stacks in your web
browser.
Example:
[https://archive.org/details/hypercard_autodiagnostics-25](https://archive.org/details/hypercard_autodiagnostics-25)
------
mgamache
I tried to get a license for SuperCard (it had color), but had to use resedit
+ HyperCard to add color textures... Miss those days (not so much)
off topic (but same era) I loved MacProject... yet another Claris step child
software. Moof!
([https://en.wikipedia.org/wiki/MacProject](https://en.wikipedia.org/wiki/MacProject))
------
mgamache
Hey I just noticed that SuperCard is still alive and offers HyperCard
migrations:
[https://www.supercard.us/](https://www.supercard.us/)
(SuperCard was a commercial HyperCard alternative)
------
jdlyga
I spent a lot of time as a 7 year old playing around with Hypercard. Gave me a
lot of good early exposure to building interfaces.
------
mgamache
Okay this may be unpopular, but I felt like Microsoft VB 3.0 (for me) was a
better version of HyperCard on a worse platform.
------
dnicolson
I wonder why there was a "Scripting language" dropdown menu, was anything but
HyperTalk ever possible?
------
rmrfrmrf
From my perspective, HyperCard lives on in 2 distinct branches: webapps and
PowerPoint.
------
tluyben2
LiveCode is pretty nice. If you want something _fast_ & cross-platform it is
kind of hard to beat. I find the language awful (far too verbose for my taste)
but I would say most people here could pick it up and be productive in one to
a few days. The forum is active and people answer fast when you're stuck.
------
crb002
How hard would it be to port HyperCard to Qt? No reason we can't have it back.
~~~
braythwayt
If the LiveCode people can get HyperCard running on *nix, I'm sure HyperCard
could be ported to Qt.
[https://en.wikipedia.org/wiki/LiveCode](https://en.wikipedia.org/wiki/LiveCode)
------
enos_feedler
I could imagine Playgrounds app turning into such a thing
------
genuineDSD
Uhm, I have grown up with Macintoshes and I still use them. So, my comment has
at least double the emotional intensity compared to the author's. ;-) In all
seriousness, I say this because it is clear that his post is mostly based on
an emotional attachment to a tool, not on critical thinking.
Maybe I misunderstood, but if I understood him correctly, the premise is that
Hypercard would still be around if it did not collide with Steve's vision of a
world of dumb users and smart engineers. This is wrong on so many levels, I
don't even know where to start. Are you willing to learn?
First, Hypercard was created when 13" 256-color displays were state of the art
(actually even earlier) and there were exactly two devices to interact with
your Macintosh: a keyboard and a mouse. So, for simple tasks, it was quite
easy to create a tool that would allow you to stick together a simple program
using clicks and a verbose scripting language. Nowadays, however, general
purpose applications are supposed to work on a variety of devices, from 4"
phones with touchscreens to desktop-Macs supporting 27"\+ screens being used
with traditional keyboards and mice. Maybe this will change in the future, but
if you want to precisely describe a structure (GUI-compontents) in a
generalized fashion, a textual representation (e.g. react-components), so far,
is just superior to any GUI-tool that is around.
When hypercard was created, VCS (such as git, svn, etc.) weren't really a
thing. Most software was developed in about the same way as you created
Hypercard projects: You made changes to your main copy and that's that. Today,
you don't even think about starting a software project without having vcs in
place. Similarly, when Hypercard was created, many software methodologies
weren't a thing: Unit Testing, Integration testing, etc.
Now, I am a software engineer, and while I never wrote Hypercard applications
myself, I once found myself maintaining an Filemaker-Application. Filemaker, I
reckon, is very similar to Hypercard in that you plug together your app using
a GUI and some overly verbose, pseudo-simple scripting language. And, needless
to say, this was an absolute disaster: In the beginning, it was a simple tool
that automated a couple of tasks and it was created in a very short period of
time, thanks to the ease-of-use of Filemaker. However, as with all other
tools, it grew in complexity. Now, ever tried to track changes in the source
code using Filemaker-files? Ever tried to unit test Filemaker-code?
And don't get me started on the absolutely ludicrous idea of using programming
languages that resemble a natural language. Claiming that this is as effective
as using an abstract language is akin to describing complex mathematical facts
using only a natural language—while possible, it is completely unfeasible.
------
ptx
Where is the eval call in the hypertalk script? Is it "value of" applied to a
string, i.e. "value of value of card field"? (I assume it's not simply "value
of" since that doesn't seem like it would work for the operator buttons in the
example - or does "/" or "+" evaluate to itself?)
This seems like a good reason to kill the language at least, in the Internet-
connected era. In a language that's supposed to read like English, eval should
be "dangerously execute untrustworthy text from ... as program instructions"
or something, not a polymorphic call to a common operator accidentally applied
to the wrong type.
------
NikolaeVarius
The fallout from it being able to be used to cause people to suffer brain
damage when opened also probably contributed.
|
{
"pile_set_name": "HackerNews"
}
|
Flexbox Patterns: Build user interfaces with CSS flexbox - micaeloliveira
http://www.flexboxpatterns.com
======
mdorazio
Flexbox is great, but honestly I'm not going to spend any time trying to
master it until IE supports it well. I know there are workarounds with
conditional styling rules and other fallbacks, but that just means I would
need to write more code instead of less. Hopefully IE11 will patch some of the
bugs and then when IE 8+9 finally drop to a small enough percentage of share
in another year or so, it will be time to jump on the flexbox bandwagon.
~~~
juliangoldsmith
You could always use a polyfill:
[https://github.com/10up/flexibility](https://github.com/10up/flexibility)
~~~
ncallaway
I just discovered this yesterday. I'm excited to dig in and make sure it works
well, but if it does it's very likely that we'll start using flexbox in our
production stuff sooner rather than later.
------
indubitably
This is a great tutorial, but why obfuscate the CSS as SCSS? In most cases
here there’s essentially nothing gained.
~~~
bobwaycott
How exactly is SCSS an _obfuscation_ of CSS?
~~~
davegauer
'Obfuscation' might be taking it a bit far, but demonstrating a CSS feature in
_anything_ other than vanilla CSS is, in my opinion, a really poor choice.
It's like having an explanation of a JavaScript feature...in TypeScript.
~~~
bobwaycott
I'll happily agree. I was only pointing out that obfuscation of CSS was not
happening.
------
based2
Flexbox Froggy: A game for learning CSS flexbox
[https://news.ycombinator.com/item?id=10652909](https://news.ycombinator.com/item?id=10652909)
|
{
"pile_set_name": "HackerNews"
}
|
A Non-Sucky Twitter Business Model - jgilliam
http://3dna.us/a_non_sucky_twitter_business_model
======
jarin
That seems awfully counterproductive, and it would take about 30 minutes for
someone to start republishing paid users' content.
Here are some ideas that I think would work way better:
\- In-stream advertising (ala Twitteriffic)
\- Charge a couple of bucks for mobile apps (or mobile app API access)
\- Charge for premium, business-oriented features (multi-user accounts,
autofollow, analytics, ability to receive DMs without following someone,
custom CSS on profile page)
\- Charge to vote on American Idol via Twitter
~~~
bigiain
"it would take about 30 minutes for someone to start republishing paid users'
content."
That'd be something that Twitter should be able to keep on top of, at least
for naive automated retweets...
The more I think about it, the more I think it might work. Especially if they
can get micropayments worked out.
~~~
jarin
Well, if anyone had any doubts:
<https://twitter.com/#!/freenyt>
------
flyosity
It's a good idea, but only if the account is posting information that's worth
the fee. Like the author says: MP3 downloads, exclusives, behind-the-scenes
stuff will be great for bands, but I think for non-celebrity accounts it might
be tougher to come up with valuable information. For retailers, coupons are
obviously the way to go, but I'm having trouble coming up with lots of other
good content examples.
------
Skywing
You'd have to be prepared for Twitter to become the Apple App Store - there
would be 2 of everything. "FooAccount" and "FooAccount (FREE)", for example.
The free ones would just advertise content visible from their paid one.
------
blhack
What about something like viglinks? Turn every bare outbound link from
twitter.com into an affiliated link, turn every link from t.co into an
affiliated link, etc.
Yes, people will be able to avoid this by using shorteners other than t.co, or
by using a third party client, but if what twitter is saying is correct (the
majority of users use the official client or use twitter.com), then this
shouldn't be a problem at all.
Are they already doing this? It seems incredibly obvious, it's totally
transparent to the users (I don't imagine that any of the users would be
against this), everybody wins.
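A hypothetical sketch of the rewriting idea, assuming a modern browser; the
redirect endpoint and query parameter are made up for illustration:

    // rewrite bare outbound links so they pass through an affiliate redirect
    document.querySelectorAll('a[href^="http"]').forEach(function (a) {
      var href = a.getAttribute("href");
      if (href.indexOf(location.hostname) === -1) {  // outbound links only
        a.setAttribute("href",
          "https://affiliate.example/redirect?url=" + encodeURIComponent(href));
      }
    });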
------
gersh
How about charging for longer tweets, or let you make your tweets have a
larger font for a fee.
------
phlux
I really think this is a great idea - and would work well along with the
following:
I recently went through several rounds of interviewing at twitter (didnt get
it) - but had I got it, I was planning on submitting the following as a model
suggestion:
Twitter has proven itself as a very valuable communications/pr channel for a
vast number of celebrities, news sources and brands.
Twitter should provide a broader content platform to the critical mass of
influential people, sources and brands by allowing for larger content to be
hosted on Twitter.com itself.
Effectively visualized as a slide-to-the-right "extended content" panel that
would allow celebrities, as an example, to have exclusive content and articles
hosted that are directly accessible from twitter, and the twitter client
etc...
You would continue to have your regular succinct content flow - but you can
drive traffic to deeper dives on your expanded panel (it effectively allows
for full-length content to be hosted in-line).
This can be monetized by twitter in a rev sharing manner such as you suggest.
If you searched for something, you could be driven to the extended content
pages and then be shown the contextual stream of tweets that apply to that
content as well.
Twitter has two options; find a way to properly monetize the message format
they have, or modify their offering. There could be a good hybrid as well in
this approach.
|
{
"pile_set_name": "HackerNews"
}
|
Think Positive. Be Positive. Stay Positive. - icodemyownshit
http://nickfenton.com/2009/12/08/positive-vibe/
======
jodrellblank
For hundreds of years, Britain (a country of now only 70 million people,
smaller in area than New Zealand) has been a world economic power, an empire-
building nation, been on the winning side of wars and world wars, been at the
front line of progress in politics, democracy, exploration, and classical,
electrical and electronic engineering, and has produced one of the top five
most used languages in the world by native speakers, or top two by total
speakers.
What's one of the main things you associate with British character aside from
queueing? A good dollop of grumpy negativity.
"For the English, Weiner claims, happiness is an American import based on
silly, infantile drivel. What the British like to be is grumpy, and they
derive a perverse pleasure from their grumpiness. British life is not about
happiness; it’s about getting by, he says." -
[http://entertainment.timesonline.co.uk/tol/arts_and_entertai...](http://entertainment.timesonline.co.uk/tol/arts_and_entertainment/books/article3516969.ece)
Stay grumpy! ;)
|
{
"pile_set_name": "HackerNews"
}
|
Show HN: Automate Multiple RSS Feeds in Mailchimp Newsletters - jamesq
https://fliprss.com
======
jamesq
We've just launched the public beta of FlipRSS and would welcome any feedback
on the website, brand and product.
It's been a fun 4 week project based on the needs of a client and it's been
great to ship beta.
|
{
"pile_set_name": "HackerNews"
}
|
Adam Carolla won’t let Personal Audio drop podcast patent infringement lawsuit - bennesvig
http://www.slate.com/blogs/future_tense/2014/08/01/adam_carolla_won_t_let_personal_audio_drop_podcast_patent_infringement_lawsuit.html
======
dicroce
Fuck "Personal Audio".
Go Adam. The world needs more people to stand up to bullshit like this. (sorry
for the language in this post, this sort of stuff infuriates me).
~~~
spacemanmatt
Go, Adam. "We thought you had more money" is a despicable reason to try to
back out of a law suit.
~~~
kevin_thibedeau
It should be grounds for classification as a vexatious litigant since it
clearly indicates that they're not serious about following through with a
lawsuit and are just running a shakedown operation.
~~~
pdabbadabba
I don't see why. Lawsuits are expensive, and as a plaintiff, you know you
aren't guaranteed to win. So if you discover that your expected recovery (p of
victory * the lesser of the expected damages or defendants' assets) is less
than the cost of maintaining the suit, why wouldn't you drop it?
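To put rough numbers on that (the figures below are invented purely for
illustration and have nothing to do with this case), the back-of-the-envelope
calculation looks like this:

    # All numbers are hypothetical, for illustration only.
    p_victory = 0.4              # plaintiff's estimated chance of winning
    expected_damages = 500_000   # damages awarded if the plaintiff wins
    defendant_assets = 200_000   # what the defendant could actually pay
    cost_of_suit = 300_000       # cost of litigating to the end

    expected_recovery = p_victory * min(expected_damages, defendant_assets)
    # 0.4 * 200,000 = 80,000, well below the 300,000 cost of the suit,
    # so a purely economic plaintiff would drop the case.
    print(expected_recovery < cost_of_suit)  # True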
Think about the consequences of automatically categorizing such plaintiffs as
"vexatious litigants." You would have a bunch of plaintiffs in court who
didn't want to be there, wasting their own time, the defendant's time, and the
court's time. It seems to me that nobody would benefit from that rule except
lawyers. Though if the lawyer is charging a contingent fee, they don't win
either. (Of course there is a separate question of how far into the case
should a plaintiff be able to drop the suit without forfeiting his right to
bring it again later, but there is already a set of pretty fair and
commonsense rules about this in the Federal Rules of Civil Procedure and state
procedural rules.
[http://www.law.cornell.edu/rules/frcp/rule_41](http://www.law.cornell.edu/rules/frcp/rule_41))
Sure, in this case Personal Audio's attempts to drop the suit might (or might
not) be just another symptom of their general sleaziness, but this sort of
maneuver is very common in all litigation, both legitimate and sleazy.
~~~
baddox
I agree. I think the patent system is bonkers, and these trolls are awful, but
in general it seems pretty reasonable to only sue a party for damages when
that party can feasibly pay the damages.
------
bravura
Aren't the details from the discovery process private? Can one party in a
lawsuit use information gleaned during discovery as part of its PR efforts?
From a press release from the patent troll:
"When Personal Audio first began its litigation, it was under the impression
that Carolla, the self-proclaimed largest podcaster in the world, as well as
certain other podcasters, were making significant money from infringing
Personal Audio’s patents. After the parties completed discovery, however, it
became clear this was not the case. As a result, Personal Audio began to offer
dismissals from the case to the podcasting companies involved, rather than to
litigate over the smaller amounts of money at issue."
[http://personalaudio.net/wp-
content/uploads/2014/07/Carolla-...](http://personalaudio.net/wp-
content/uploads/2014/07/Carolla-DIsmissal_7-29-2014.pdf)
~~~
pravda
No, details from discovery are not private.
But Personal Audio is lying in their press release. They want to drop the case
because Carolla is fighting back.
~~~
jakewalker
Depends on whether or not there is a Confidentiality/Protective Order in
place, and if so, what the terms of such an Order say. Almost every case of
this nature would be subject to some sort of Confidentiality/Protective Order.
Generally, the parties would be able to designate documents, deposition
testimony, etc., as Confidential. Sometimes there are multiple levels of
confidentiality baked into one of these orders (for example, a "HIGHLY
CONFIDENTIAL" designation might mean that only outside counsel can review a
produced document).
Generally, documents marked as such must be filed under seal, cannot be
disclosed publicly, etc.
~~~
skywhopper
Surely Adam Carolla's representatives can, with his permission, release
characterizations of the amount of money he's making off podcasting,
regardless of whether the details of his podcasting income were subject to
discovery. The plaintiff would reasonably not be able to release information it
sought from the defendant, but the defendant can obviously release his own
personal information at will.
------
ecaron
People can donate to his legal defense fund at
[https://fundanything.com/patenttroll?locale=en](https://fundanything.com/patenttroll?locale=en)
- it has already raised nearly $500k!
~~~
kelukelugames
I feel conflicted. I like Adam but I think he is rich enough to not need my
help.
~~~
Domenic_S
In his Senate video he says the case post-discovery is costing $100k/month.
I'm sure Ace is rich, but 1.2MM/yr to blow on legal fees rich?
~~~
FireBeyond
Net worth of $15-20MM. Plus, there’s the possibility of these guys being
assigned costs.
------
sjtrny
Previous discussion
[https://news.ycombinator.com/item?id=8113757](https://news.ycombinator.com/item?id=8113757)
------
arms
Good for Adam. These patent trolls are reprehensible. It's hilarious that they
want to pull out now that they stand a chance of losing.
------
jobu
The publicity Adam Carolla gets from this case is likely worth more than the
legal fees from keeping the case going.
~~~
ugk
Never paid good lawyers, huh? Or just trolling?
~~~
jobu
I was completely serious. How much do you think the media coverage he's gotten
over this would cost if he'd paid for it as an advertisement?
~~~
mbrameld
According to another comment on this story, Adam told the Senate his legal
fees are $100,000 a month. I'm not sure what else he has going on
professionally but I doubt he'll see a big enough rise in podcast subscribers
to offset that.
|
{
"pile_set_name": "HackerNews"
}
|
Do You Need To Move To The Valley? - pchristensen
http://www.danmartell.com/do-you-really-need-to-move-to-the-valley/
======
joedynamite
I thought this was an awesome article. I saved up money and quit my job last
year to move from NY to SF. It never panned out, so I wound up being jobless
for 9 months until I found something new. I still want to move out there and
this just reinforces it. I'll get out there sooner or later.
|
{
"pile_set_name": "HackerNews"
}
|
Germ-Killing Brands Now Want to Sell You Germs - howard941
https://www.bloomberg.com/news/features/2019-04-22/even-clorox-and-unilever-want-the-booming-bacteria-business-to-thrive
======
october_sky
WSJ referred to this article with this summary: "The world’s best-known
antibacterial brands are now pouring millions into probacterial startups."
(from the WSJ daily digest email)
|
{
"pile_set_name": "HackerNews"
}
|
Ask HN: Old Investors want free stock in new company - what should I do? - throwaway16
I had a startup in the past that didn't work out. I had some angel investors
who each invested about $20k. Now I'm doing a new startup that is going really
well, and those old investors tell me that the right thing for me to do would
be to give them some stock from my common for free. I really like them and am
grateful for all they did for me in the past, but I'm not sure if this is a
common and right thing for founders/past investors to do, or if they want to
take advantage of me.
The new startup has nothing to do with the old. New idea. New team. New
everything.
I don't mind the shares. I just want to be sure that this is normal behavior
and common practice.
If someone has experience as a founder / investor / lawyer with this kind of
situation, please help!
I could REALLY use some advice! Thanks
======
kgrin
Short answer: no. This is what distinguishes an investment in a limited
liability business (LLC, Corp) from, well, indentured servitude.
I'm assuming their investment in startup #1 was made freely and without
deception - they took a risk, like many other investors in many other
startups, and it didn't pan out. Had it, they presumably would have enjoyed a
return commensurate with the risk. Anyone investing in startups (or any high-
risk vehicle) should be aware of... well, risk. Assuming everything was above
board, they knew what they were getting into, and you don't owe them anything
more than your best efforts in that venture.
Now, all that is not to say that you should be a jerk about it. You certainly
should recognize the relationship, personal and professional, and, say, let
them join any investment round in the new startup.
Bottom line: if they want to actually get involved in this venture, it
wouldn't be out of line to give them an opportunity to do so, perhaps on
slightly favorable terms. But if they just want something for nothing - I call
shenanigans.
------
joshu
This is not common practice. And it's pretty uncool. Note that while I am an
angel investor (...) I am in Silicon Valley, where things can be different
from the rest of the world.
Out of the 40ish startups I have invested in, the half-dozen or so VCs I am
an LP in, and the hundreds more startups I merely interact with, I've never
heard of anything like this happening.
You giving them common out of your own pocket is also probably difficult from
a tax POV. Consult an accountant/lawyer/etc.
On the other hand, the losses from the previous startup do give them a tax
offset, so it's not like they walked away entirely empty-handed.
You can make an advisory board and grant them some options that vest over time
if they are going to contribute (as they think they are going to, if they
think they want to keep the relationship open.) But these guys sound uncool
and I would just avoid them in the future.
My gut says, however, no.
(BTW this is where a good lawyer comes in - they can tell you what is normal
in the industry.)
------
dkokelley
How was the old venture terminated? Was there an LLC or corporation that was
dissolved? I'm concerned that the previous investors might claim that although
this is a new idea and new team, the company is the same. How was each venture
established legally?
------
thrill
I've seen people offer the right of first refusal to old investors - i.e. they
get the first chance to decline any investment at the initial price (assuming
price will increase for later rounds)
------
tlrobinson
I've never heard of anything like that. Not common or normal as far as I know.
If you like them and think their advice/connections/whatever would be valuable
you could bring them on as advisors and compensate them with an appropriate
amount of stock.
But considering they're asking for free stock and telling you it's "normal"
I'd be wary of their advice...
------
throwaway16
Just to clarify - they never pressured me. They just said that this is common
practice and would be a good way for me to keep the good relationship going.
I also offered to let them invest in the new company and they want to do it.
But they feel like getting some shares from my common on top would only be
fair.
~~~
dkokelley
Have a look at goodwill (
_<http://en.wikipedia.org/wiki/Goodwill_(accounting)*>).
Have there been any investments in your new venture? Anything that could help
set a valuation? If you wanted to *give* them equity, understand that it is
an arm's-length transaction on your balance sheet, not a freebie.
If you really want to, you could give them a 'goodwill' deal. No shares for
free on top, but if they wanted to invest, they could invest on favorable (and
quantifiably so) terms.
While this isn't legal advice, I don't believe you have any obligation to
include your previous investors at all, much less provide them with favorable
(or one-sided) investing terms, unless of course there were special terms
dictated in whatever contract you previously had with them. If that is the
case, definitely get in touch with an appropriate lawyer to keep yourself
protected.
~~~
dkokelley
My syntax was skewed. Here is the real link:
<http://en.wikipedia.org/wiki/Goodwill_(accounting)>
------
geoffw8
No. They took on the risk of the investment; that's why they got the price
they got.
------
anamax
> I had some angel investors who each invested about $20k. Now I do a new
> startup that is going really well and those old investors tell me that the
> right thing to do for me would be to give them some stock from my common for
> free.
Would they have been willing to have their previous investment diluted on the
same terms?
They'll respond that they're not asking new investors for stock, that they're
asking you, but ....
------
_0ffh
I am pretty sure that this is not how it is supposed to work. OTOH you might
want to give me some stock for the good advice... =)
------
steveklabnik
Here's the correct response:
LOLOLOLOLOLOLOLOL
You may want to be slightly more polite to their face.
------
babeKnuth
Return the favor: Ask them for free money for your new company.
------
OmarTv
You don't need anyone who's been a founder/investor/lawyer (although I'm one
of those three). Don't let them pressure you. If they didn't invest in this
new idea, there's no way they should have the right to ask you for that.
------
kls
Here is my take on it: if it was your idea and they invested in it, they gave
you a shot intending to make money off of their money and your sweat. It did
not work out, and that is that; you have no obligation to them. You both took
a risk and it did not work out. That being said, they gave you the capital to
take a shot, and there are a lot of people who would be very grateful to get
that shot.
So it is really contextual: were these guys scraping up their savings and the
rent money to give you the shot? If so, they took a bigger risk in investing
in your idea than a guy who has several million lying around in the bank. To
me that would affect my decision on the subject.
At another level it is kind of strange that they feel entitled to revenue from
a new venture with new people who had no commitment to them. It is totally
independent other than the fact that you are involved.
For me, if someone bet it all on me by scraping together what they could and
I made it on a separate venture, I would try to restore them to where they
were before they made an investment in me. Now that is just me; there is no
right or wrong here. If it was a professional investor who knew the risk, I
would tell them to fly a kite. Finally, I would not give them stock: if you
are doing well and can spare the cash, I would pay them back their investment
and not a penny more. They have no entitlement to any gain that this company
produces, and it actually robs someone who did take a chance on this go-round
of their rewards. Because quite honestly, one of the others, like a first
employee, is far more deserving of those shares than someone who took a
chance on an independent venture.
So for me, if the investment was made by someone who took a big risk I would
pay them back what they invested, no more. They are not entitled to it and
should be extremely thankful for even that.
------
TamDenholm
I've no idea if this is normal behaviour but I highly doubt it. They should
realise that sometimes investments don't work out, and that's what happens.
If you're open to it then allow them to invest in your new company, but don't
give it to them for free and don't discount it either; otherwise you're
letting yourself get taken advantage of.
------
mahmud
I, personally, would give them something before they even asked. Not because I
am required to, but because I want them around me longer.
The people who believed in you when you were less capable and poorer are the
ones you want around you when things are better.
I am still friends with the guy who gave me my first $100 as a programmer,
more than 13 years ago.
~~~
babeKnuth
Nice try "Old investor" in throwaway16's original company. We know it's you.
------
teyc
It is never ever about the money.
Offer them a small bonus, but not at full 20k value.
If these are experienced investors and you want them around, let them know
you value them. As much as you want to give them a bonus, make sure they
realize you value the relationship more than the bonus you are able to offer,
which is a "recognition" and "thank you" for continuing to have faith in you.
Tell them you don't care whether this is the norm or not, but that you would
prefer it not be the norm, because you want them to feel good about this
rather than treat it as an entitlement.
Then apologize that you couldn't offer more.
|
{
"pile_set_name": "HackerNews"
}
|
Show HN: Stacks – share your MOOCs with friends - johnnyodonnell
https://stacks.courses
======
niuzeta
So this is like a Goodreads-style aggregated recommendation engine for online
courses?
I like this idea a lot. There are so many resources nowadays that it's hard
to pick and commit to one[1]. Barry Schwartz's Paradox of Choice talks about
this phenomenon extensively - when there are too many options to choose from,
the abundance itself becomes a hindrance to actual action, because the
perceived opportunity cost multiplies with the number of options.[2]
If enough people with enough interest update this database, I wonder what
kind of insight you'll get - for example, I see many people mentioning Andrew
Ng's Machine Learning Coursera course, but is it truly the "best" or are we
simply seeing selection bias?
[1] I have this one link bookmarked to illustrate my point to anyone who would
listen -
[https://www.reddit.com/r/learnprogramming/comments/55npda/he...](https://www.reddit.com/r/learnprogramming/comments/55npda/heres_a_list_of_520_free_online_programmingcs/)
[2] [https://www.amazon.com/Paradox-Choice-More-Less-
Revised/dp/0...](https://www.amazon.com/Paradox-Choice-More-Less-
Revised/dp/0062449923/) if anyone's interested.
~~~
johnnyodonnell
Yes! Very much like Goodreads for online courses. However, at the moment this
is best used to keep your friends up to date with what courses you're taking
rather than helping you find which courses to take. There are already a couple
good resources out there for helping you find the best course to take (ex:
CourseTalk ([https://coursetalk.com](https://coursetalk.com)), Class Central
([https://class-central.com](https://class-central.com)).
------
kingod180
Cool idea! It'd be nice if there was a feature to have your updates
automatically publish to Twitter or Facebook.
|
{
"pile_set_name": "HackerNews"
}
|
Prototype glasses help the visually impaired avoid obstacles - KD12
http://www.engadget.com/2012/05/29/prototype-glasses-help-the-visually-impaired-avoid-obstacles/
======
jonmrodriguez
Similarly, an app for either AR glasses or camera glasses that pair with a
phone should be able to help totally blind people navigate by providing an
interface like "virtual sonar", where headphones play sounds in which pitches,
amplitudes, and Doppler effects encode the direction to the destination and
the bearing and distance of each nearby obstacle or danger.
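As a rough sketch of how such an encoding might work (the pitch, volume, and
pan mappings below are assumptions for illustration only, and they cover only
bearing and distance, not the Doppler cue for motion):

    def sonar_tone(bearing_deg, distance_m, max_distance_m=10.0):
        """Map an obstacle's bearing and distance to pan, pitch and volume.

        bearing_deg: 0 is straight ahead, negative is left, positive is right.
        distance_m: distance to the obstacle in meters.
        Returns (pan, frequency_hz, amplitude) for a synthesized tone.
        """
        # Pan toward the obstacle's side (-1.0 = hard left, 1.0 = hard right).
        pan = max(-1.0, min(1.0, bearing_deg / 90.0))
        # Nearer obstacles get a higher pitch: ~220 Hz far away, ~880 Hz close.
        closeness = max(0.0, 1.0 - distance_m / max_distance_m)
        frequency_hz = 220.0 * (2 ** (2 * closeness))
        # Nearer obstacles are also louder.
        amplitude = 0.2 + 0.8 * closeness
        return pan, frequency_hz, amplitude

    # Example: an obstacle 30 degrees to the right, 2 meters away.
    print(sonar_tone(30.0, 2.0))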
~~~
jonmrodriguez
If someone else doesn't beat me to it, I'll release this as an app for the POV
camera app glasses my startup is making, <http://kck.st/redefine-reality>
|
{
"pile_set_name": "HackerNews"
}
|