
Blog

Embedding Human Personality

The year is 2019. I have just begun my first real job, at the ripe age of 26. After several job applications, and after an inspiring 26-hour round-trip drive between Cincinnati and Boston, navigated using only highway signs, which culminated in an M.I.T. VR hackathon where my team attempted to build a computer vision ensemble that could predict a person's heart rate just by looking at their forehead, and display it above their head for anyone viewing them through a Magic Leap, I had caught the interest of the prototyping department of Singularity University, a non-profit co-founded by Ray Kurzweil himself, right across the street from the Googleplex. It was not a prestigious company, but the position was uniquely awesome. I was allowed, even encouraged, to explore and create demos for any weird technology I could get my hands on, to inspire the C-suite executives who otherwise were rarely (outside of the cinema) in close proximity to the ideas of minds as mildly ill as mine. Ideas like: can AIs replace the human capacity for artwork? Can we teach people dangerous and advanced skills at optimal learning rates using only VR and a BCI? What does it mean to interpolate between human faces? That last one caught my interest, and during that job I spent several weeks running StyleGAN, endlessly interpolating, one tiny step at a time, between randomly chosen faces. I figured that if this was already possible, nearly any kind of data transformation capability would soon follow.
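The "tiny step at a time" morphing described above is, at its core, just linear interpolation between two latent vectors. A minimal sketch, assuming a hypothetical 512-dimensional latent space like StyleGAN's (the generator network itself is omitted; only the latent-space walk is shown):

```python
import numpy as np

# Two random latent codes, standing in for two randomly chosen faces.
# 512 dimensions is StyleGAN's latent size; the values here are toy data.
rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)  # latent code for face A
z_b = rng.standard_normal(512)  # latent code for face B

def lerp(z1, z2, t):
    # t=0.0 gives face A, t=1.0 gives face B; values between blend them.
    return (1.0 - t) * z1 + t * z2

# Many tiny steps yield a smooth morph sequence; each frame would be
# decoded by the generator into an image.
frames = [lerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 100)]
```

In practice, spherical interpolation (slerp) is often preferred over straight lerp for Gaussian latents, since it keeps intermediate vectors at a typical norm for the prior; the linear version above is the simplest illustration.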

Skip forward through several of my chronic issues with employment. (The charitable explanation is that I'm a bit on the spectrum, more than a bit manic depressive, have a high resting heart rate that reads as stress or aggression, and was modeled bad behavior as a kid. The uncharitable explanation is that I have antisocial personality disorder and am just an asshole. My friends are aware of both explanations; former employers, I imagine, tend towards a third: that I'm a mean-spirited, bad employee. It is worth adding that I also have issues with consistent productivity. The charitable explanation is that I was abandoned by my parents for boarding school and college, did lots of drugs to cope, and messed up my life enough that I have trouble believing in myself anymore. The uncharitable one, which my mom prefers, is that I'm lazy like my dad. Given how lazy my dad and granddad were (my granddad inherited enough money to retire at 25, and immediately obliged himself to do so), it is a reasonable explanation with the merit of simplicity as well.) I had just finished (been asked to leave) my startup, which had been doing image generation before Stable Diffusion had even been released. I would know; I had been keeping track of the developments and hung out with the Stable Diffusion authors at the conference where they released the paper. To me, the idea of mapping descriptions in words to images was the most spiritual experience I had ever had outside of doing psychedelics. In fact, reading ML papers (especially ones related to image and 3D model generation) from 2019 to 2022 felt like a spiritual practice. I would print them out in little booklets (an idea first forced upon me by the booklet print option at Japanese 7-11 printers in Tokyo, where I lived from 2019 to 2020) and underline and read every little bit that I could.

I could say much more about my time from 2019 to 2023 falling in love with generative AI, seeing every twist and turn as it took shape, and the shocking yet expected change I witnessed as my niche hobby, one that maybe only 1,000 people in the world were into in 2019, turned into the biggest topic on everyone's minds. I somehow always knew it would be that big, though it made me sad to lose my special secret alchemy. Instead, I want to talk about what's next.

CLIP made it possible to map words and images into a shared embedding space, one that gives a sense of distance, along many axes, between any given words and images. Embedding spaces are magical, and still underexplored. I find the text and even image outputs rather useless compared to the value of AI models showing us the real wisdom they have learned by consuming more information than a single human could ever absorb in a lifetime. Perhaps the most beautiful thing about embedding spaces is that you can write geometric rules that capture intuitive things we learn about the world. The most common example is that the embedding vectors for the following words roughly satisfy king - man + woman ≈ queen.
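The king - man + woman ≈ queen relation can be made concrete with simple vector arithmetic. A minimal sketch using toy 3-dimensional embeddings (the values are invented for illustration; real word embeddings such as word2vec or CLIP's text encoder use hundreds of dimensions learned from data):

```python
import numpy as np

# Toy embeddings, hand-picked so the gender/royalty analogy holds.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def nearest(vec, vocab):
    # Return the vocabulary word whose embedding has the highest
    # cosine similarity to the query vector.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda w: cos(vec, vocab[w]))

target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, emb))  # prints "queen"
```

The same nearest-neighbor-after-arithmetic trick is how analogy benchmarks probe real embedding models; the geometry encodes the relation, not any single coordinate.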

One lesson I learned after meeting the Midjourney team and the founder of my own team (Jordan, of Vizcom) is that I was not in love with image creation so much as I was in love with things that could map between data. I was in love with embedding spaces. Creating images isn't what I cared about most in this world. Maybe it is my own personal struggles with socializing; maybe it is my deep desire to be loved, mixed with behavior that seemingly shows me doing everything I can to not be loved by some; maybe it is that I'm a monkey hard-wired to care about relationships with other monkeys; maybe it's my religious upbringing making me focus on the souls of humans as the only meaningful thing. Whatever the cause, what I care about a lot is human relationships. Big ones, small ones, complex ones, weird ones. I love the idea of figuring out how humans interface. Most people interested in this are, I think, far less autistic and far more emotional than me. I consider my dispassionate obsession with the topic to be an edge, and a unique perspective. Most people I've met who are like me seem extremely disinterested in human emotion or relationships; usually they focus on pursuits more explicitly about engineering or economics. Of course, I can't help but see my passions for economics, governance, and history as just extensions of an obsession with human relationships. I think of so many interactions in terms of game theory. At the same time, I love when people break expected behavior, when they act irrationally, when they show mercy or forgiveness or acceptance. The most beautiful discontinuities in human relationships all seem to boil down to moments when people show love.

If only we could understand human relationships.

If words are the descriptive semantic building block and the distribution of generated images the instantiated output, then human personality, current state (life circumstances), and social graph would be the building blocks, and the probability distribution of relationship outcomes between the people in the graph would be the output. This is what we plan to work on at Love Labs: human personality embedding, and prediction of probability distributions over relationship outcomes. Our techniques and (hopefully) results will be shared in forthcoming papers. If you feel this is perhaps the most important thing you could be working on at this juncture in the universe, and feel you could benefit from collaborating, please feel free to reach out. My girlfriend and co-founder will be starting her master's program soon, and we will be publishing over the next two years as we run our startup as well (7-day in-person relationship-matching workshops between people from different countries). Sorry if the suspense above left you expecting more answers here, but we are just getting started with this journey. Thanks for reading; I hope it was interesting.

Love Labs