wittyandcharming:

muchadoabouttruffles:

Okay, just hear me out for a second.

Muggleborn kid with a talent for magic. Not real magic. Like, sleight of hand magic. And then a prefect catches them doing something like making a ball appear to vanish or whatever, and just loses their shit because this 11 year old kid has utterly mastered Vanishing Spells and what the hell how is that even possible.

#what’s that behind your ear professor snape #detention

xeniawarriorprincesa:

lesbianmarajade:

we need to stop idolizing tom holland immediately. yesterday, actually.

it’s really cool that he’s this seemingly progressive sweet young dude, but i saw a whole post about how he’s “the perfect example of non-toxic masculinity uwu” and another post gave him credit for trans-coding peter parker, like the “coding” stuff in homecoming wasn’t basic “ha ha misgendering is funny” stuff or misogyny, and also stuff that’s in the script, which he did not write and should not get credit for.

like………. he’s great. watch his interviews, enjoy his performances.

but also he’s like 21??? im 21 and im incredibly stupid all the time. tom’s also gonna be incredibly stupid because like……. people in their early 20s….. are stupid…. and say stupid shit…. and if you hold them to an impossibly high standard…. they will fail you…. no matter how good their intentions are….

just. cut it out early before this turns into another idolization-to-hatred cycle.

In summary: please be fucking normal

The Visual Chatbot

lewisandquark:

[image]

There is a delightful algorithm called Visual Chatbot that will answer questions about any image it sees. It’s a demo by a team of academic researchers that goes along with a recent machine learning research paper (and a challenge for anyone who’d like to improve on it), and its performance is pretty state-of-the-art, meant to demonstrate image recognition, language comprehension, and spatial awareness.

However, there are a couple of interesting things to note about this algorithm.

  1. It was trained on a large but very specific set of images.
  2. It is not prepared for images that aren’t like the images it saw in training.
  3. When confused, it tends not to admit it.

Now, Visual Chatbot was indeed trained on a huge variety of images. It can answer fairly involved questions about a lot of different things, and that’s impressive. The problem is that humans are very weird, and there are still many things it’s never seen. (This turns out to be a major challenge for self-driving cars.) And given Visual Chatbot’s tendency to react to confusion by digging itself a deeper hole, this can lead to some pretty surreal interactions.

[image]

Another thing about Visual Chatbot is that most of the images it’s been trained on have something in them – a bird, a person, an animal. It may have never seen an image of just rocks, or a plain stick lying on dirt. So even if there isn’t an animal there, it will be convinced there is. This means this bot always thinks it’s on the best safari ever. (For the record, it thought the stick lying on dirt was “a bird is standing on a rock in the snow”.)

[image]

If you ask it enough questions, could you get an idea of how it made its mistakes?

[image]

An algorithm that can explain itself is really useful. Algorithms make mistakes all the time, or accidentally learn the wrong thing. This particular algorithm didn’t have trouble with hallucinating sheep like some other algorithms I tested. But it did have similar problems with goats in trees, and now I finally got to ask why.

[image]

[Goat image: Fred Dunn]

Upon further questioning, however, it also decided that dogs have horns, and that birds do not fly. Actually, it turns out that a lot depends on how you ask the question. The answer to “do bunnies fly?” is “no”, but the answer to “can bunnies fly?” is “yes”, so either the algorithm is answering a lot of these questions at random, or bunnies *can* fly but choose not to. (The construction “Do <blank> have <blank>?” seems to almost always result in a “yes”, so I can report that yes, bunnies do have spaceships and lightsabers.)
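The failure mode described above – answering from the surface template of the question rather than from the image – can be illustrated with a toy stand-in. This is not the real Visual Chatbot (its internals aren’t shown here); it’s just a sketch of a hypothetical answerer that keys on question phrasing, which is enough to reproduce the contradictions reported above:

```python
import re

def toy_answer(question: str) -> str:
    """A toy, hypothetical question-answerer that ignores the image
    entirely and keys on the question's surface template -- mimicking
    the phrasing-dependent behavior described in the post."""
    q = question.lower().strip().rstrip("?")
    # "Do <blank> have <blank>?" almost always yields "yes"
    if re.match(r"do\s+\w+\s+have\s+", q):
        return "yes"
    # "can ..." questions skew "yes", plain "do ..." questions skew "no"
    if q.startswith("can "):
        return "yes"
    if q.startswith("do "):
        return "no"
    return "i can't tell"

# The same fact, asked two ways, gets contradictory answers:
print(toy_answer("do bunnies fly?"))                         # "no"
print(toy_answer("can bunnies fly?"))                        # "yes"
print(toy_answer("do bunnies have spaceships and lightsabers?"))  # "yes"
```

Because the answer depends on the template and not the content, consistency checks like asking the same question two ways are a cheap probe for this kind of shallow pattern-matching.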

[image]

So I wouldn’t necessarily believe Visual Chatbot’s answer to my question about the zoo rocks thing. In fact, it seems to have learned to give explanations that are total lies – if it doesn’t know the color of something, it’ll answer “it’s a black and white photo so i can’t tell” without realizing that this excuse only works on an actual black and white photo.

It’s too bad this is so tricky. Since algorithms can often be biased, it would be great if we could ask them “Why did you show me that ad?” or “Why did you decline my application?” But getting a sensible answer from them may not be all that straightforward, especially if they pretend they know more than they actually do.

Bonus material: Visual Chatbot explains the plot of The Last Jedi. Enter your email and be edified (contains spoilers – sort of.)

[image]

mexicanheaux:

mexicanheaux:

If you live in the socal area and are/know someone undocumented, please be careful when going to Walmart, or to be safe just don’t go in general. ICE has been known to go in there

This isn’t information that can sit in your likes guys I’m not trying to guilt trip but this is life or death please reblog and spread

strongbadgmail:

strongbadgmail:

folkdad:

pro tip, u do not have any banter about chip cards that your cashier hasn’t already heard just do not say anything about the chip to your poor cashier, if u even think about saying “it’s different everywhere you go!” theyll hope u die

don’t ever banter with a cashier. they want you to die as soon as you walk in

being on register is like playing a game where youve heard all the possible dialogue already and youre just smashing buttons so the dialogue goes away faster

missmissie:

phlora:

kids wouldn’t hate vegetables if adults didn’t undercook and underseason them

*people in general wouldn’t hate vegetables if they didn’t undercook and underseason them

Also kids are more sensitive to bitter tastes so they don’t eat literal poison, which doesn’t help.