evan's thoughts


The PlayStation 6 Does Not Exist

When the PlayStation 5 launched in 2020, I was genuinely excited. Maybe it was because it felt like Sony was trying to do something unique in the space with its controller and design. Perhaps it was because the Xbox released alongside it felt more like an iterative upgrade to the One X I already owned, while the PS5 would be a huge jump from the launch PS4 I was still playing Final Fantasy 7 Remake on. Regardless, I bided my time and went through many online checkouts before I received my PS5 sometime around March 2021. I loved my PS5 at the time. Even when I bought a Series X I still preferred the premium feel and design of the PS5, especially its brilliantly designed new controller. However, take that new UI away and the PS5 is just a nice remix of the PS4.

The thing that’s different about this generation compared to previous ones is that they’re still making PS4 games. The PS5 Pro is already out, it’s been four years since the original PS5 launched, and most games still come out on the PS4 as well. Most indie games release on it, all sports games release on it, and even Call of Duty still releases on it! By this point in the PS3’s lifecycle it had received its last CoD game, Black Ops 3, which shipped without a campaign mode on the platform. That was two years after the PS4 released. A leaker back in 2023 published documents showing Sony’s internal numbers, and the results shocked a lot of the gaming world: while the PS5’s MAU appeared to be growing month over month, 75% of PSN’s MAU was logging in from a PS4. That share was obviously going to shift over time as the PS5’s supply issues eased, and it did, but Sony confirmed earlier this year that half of PSN users are still on PS4. So why is this happening now, when it never has in the past?

First off, the casual audience that plays those major games (Call of Duty, sports games, F2P shooters / mega malls like Fortnite / Roblox / Genshin) isn’t the type to care about visuals to the extent of spending all that money on a new console immediately. The games still work, and they’re still spending money on them. Importantly, that’s the reason all these major developers will keep supporting the platform, so it’s a bit of a chicken-and-egg problem. Second, the architecture has remained the same between generations. The PS4/5 and the Xboxes all use relatively standard AMD x86 CPUs and GPUs, and share more in common with their gaming PC counterparts than with their predecessors. That makes it much easier to target software across multiple generations than it ever was. Combine this with the proliferation of off-the-shelf engines like Unreal and Unity, and a lot of the work of targeting different systems disappears.

The third thing is diminishing returns. I bought a PS5 Pro at launch because my PS5 is my favorite place to play games, and I wanted a thing that ran FF7 Rebirth a bit better. It arrived in the mail, I turned it on, and it sure just does that. As a recovered PC gamer, I have never had a console feel more like just a GPU upgrade than this thing. At the end of the day that’s what I wanted from it and I am happy with the money I spent, but I can’t recommend this system to the majority of people. However, it catalyzed a thought: the PS5 Pro is the PlayStation 6. This is what consoles will just become.

The Xbox One / One X / Series S / Series X are sort of the same exact box. They’re different in various ways (the original One, launched in 2013, feels slower to navigate these days, and they output different resolutions and frame rates), but they are all the same box. They run the same operating system software (minus quick resume / dynamic backgrounds on the old boxes). They run the same games and use the same cross-save system. I can play a game on my Series X, go to a Series S, then a One X, then a One, and the save will transfer between all four seamlessly with zero issues. They all support cloud streaming games. This made getting a Series X feel more like upgrading a PC or getting a new iPhone in the mail than moving to a new “generation” of consoles.

For a while I used this as a negative against the new Xbox, but in hindsight I used it to ignore the many puzzling decisions Sony made this generation. For one, PS5 games only work with DualSense controllers. There is no smart delivery or cross-sync between PS4 and PS5 saves beyond what developers manually implement. It seems like Sony has gone out of their way to make the experience of owning both systems worse in order to make the PS5 feel more different than it actually is, given the diminishing returns of the hardware. They’re clinging to this idea of a “generation,” which no longer fundamentally exists, as a marketing gimmick, and for the most part consumers understand this. It’s why there are more people playing on a base PS4 right now in 2024 than own an Xbox, period.

So what do I mean by “The PlayStation 6 does not exist”? Obviously Sony is going to release a box called the “PlayStation 6”, because they’d be dumb not to. What I mean is that the idea of a “PlayStation 6” is a fantasy. The next PlayStation might look very different physically from a PS5, it might have a completely different interface and a revamped controller, but in reality it will just be a remixed PS5 Pro, and consumers know this. The idea that the PS5 Pro will be obsoleted once the PS6 launches is also a fantasy. The PS5 will probably continue to receive a majority of game releases well into the mid-to-late 2030s. The idea of a console generation is dead, regardless of whether Sony wants to admit it. Future game consoles will be bought and upgraded the same way all consumer technology is: incrementally, every year or two. Developers will target all these boxes simultaneously, because they already are.

Even without PC / Switch, most games still release for the PS5 / PS4 / PS4 Pro / Xbox One / Xbox One X / Xbox Series S / Xbox Series X, and that’s excluding the PS5 Pro. You buy the box when you want to upgrade, and when you open it your games will just run a little better depending on how long you waited. That’s it. Just like a phone, just like a computer. Eventually, after 12 or so years, you might stop seeing games for your old box, just like your old phone loses support or your gaming PC can’t play new games without a GPU upgrade. The “generation cycle” though? That’s just over.

Keeping Things Federated

With the decline of the website FKA Twitter, there’s been a rise in federated social platforms. In short, the idea of these platforms is that you can follow people from different sites and have one place where you see your content: imagine following your Facebook friends on Twitter, or seeing Instagram Reels in your TikTok. Building on that idea, networks like Mastodon and Bluesky are free and open source, and allow anyone to run servers that host smaller micro-communities. If you open Mastodon, you’ll notice that while most people have a username that ends in @mastodon.social, some have others! Mine is @evanhirsh@hachyderm.io, a Mastodon instance focused on software engineering. Hachyderm is a free instance of Mastodon, and makes all its money off donations. This is great when it works, but it doesn’t always work.

Social networks are already an unprofitable business within the framework of capitalism. You have to host text, photos, and videos in perpetuity for all your users. Even if I ran a Mastodon instance with 2,000 weekly active users and froze sign-ups, my costs would grow every single month, even with S3 object storage, if my users did not turn on auto-delete. This is not sustainable for small donation-only instances. This year alone we’ve seen the closure of many instances (mastodon.au, mozilla.social, botsin.space) and spinoff anticapitalist networks (Cohost) due to a lack of funding, in both the literal and non-literal sense of the word. Over in Bluesky world, benefactor / cofounder Jack Dorsey left the board, forcing them to scramble to raise funding from Blockchain Capital. If this continues, the fledgling idea of “federated social networks” risks dying out in favor of an open protocol that is functionally owned and maintained by a few large parties which benefit from the veneer of portability and openness.

So, I’m gonna say something rash. If you clicked through to this post from Mastodon or even Bluesky, please brace yourself for a second, because this might make you angry, but please, please give me a chance to explain myself here.

I think Mastodon (and Bluesky) need to implement some form of advertising (and subscriptions) as a revenue path. Let me clarify.

Donations are not enough. All users, active or not, cost each Mastodon instance dollars a month to host. If you are not giving every instance you have content on at least five bucks a year, you are not covering your hosting costs, plain and simple. To fix this, Mastodon explicitly needs to build systems into the project itself that allow admins to monetize their instances. Here are two of the ideas I had for how these networks could implement funding in a way that is both ethical and respects their users.

Plug and Play Ads

Assume I am a Mastodon admin. Most of my users are free, and they do not donate. One option would be for me to turn on ad posts on my instance. By default, there would be a network provided by the Mastodon project containing all the advertisers that wish to buy ads on Mastodon. These ads would run across all instances in the network, and would not target specific groups using information such as age or location. If I didn’t like the default Mastodon ad network, I could plug in a different one with a different revenue split or different advertisers. If my users got annoyed with the frequency, I could change it to 1 ad every 20 posts or so, which earns me less money but makes them happier. So then why would any user stay on my instance? They could just go to another one without ads. Well, you’ve read this post long enough that I assume you aren’t just going to go back and yell at me online in bad faith, so here’s the kicker: users can just turn the ads off. Third- and first-party clients would both have this option! Users that don’t want ads are going to block them anyways, so being advertised to is explicitly a way for users who, for whatever reason, won’t or can’t donate to ensure their favorite place stays alive. If you turn that off, and you aren’t giving money, you’re just sort of an asshole moocher! That’s fine if you want to be, but that’s your choice. Admins could also
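The ad-frequency knob described above is simple enough to sketch. To be clear, nothing like this exists in Mastodon today; the function name, parameters, and opt-out flag are all my own hypothetical invention:

```python
# Hypothetical sketch: interleaving network ads into a home timeline at an
# admin-configured frequency, with a user-level opt-out. Not a real
# Mastodon API; purely illustrative of the "1 ad every 20 posts" idea.

def interleave_ads(posts, ads, every_n=20, user_opted_out=False):
    """Return the timeline with one ad inserted after every `every_n` posts.

    Users who opt out (or who donate) get the plain timeline unchanged.
    """
    if user_opted_out or not ads:
        return list(posts)
    timeline = []
    ad_index = 0
    for i, post in enumerate(posts, start=1):
        timeline.append(post)
        if i % every_n == 0:
            # Rotate through the ad pool provided by the instance's network.
            timeline.append(ads[ad_index % len(ads)])
            ad_index += 1
    return timeline
```

Tuning `every_n` up trades revenue for user happiness, exactly the lever an admin would want; the opt-out path costs nothing to honor because no ad is ever fetched for that user.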

Just have users pay for things

One way to solve the growing-data problem from my 2,000-user example is sort of how Slack does it: just delete posts after a few months by default. A way Mastodon instances could help keep costs under control is to make it so that if you want to keep your older posts, you need to pay a buck or two a month. Many users prefer having their posts stay ephemeral anyways, and it’s not like posts are portable across instances. I see this as a pretty reasonable way to get users to help out with the costs of hosting these things. Another cool option might be to literally offload the server cost to the users. Built-in cost calculation based on storage space and API usage that just gets billed to the user is pretty transparent, and probably the most ethical, non-profit way this could be run. If a server enables both this and ads, it could auto-shut the ads off when someone is paying! There could be a bar visible in the UX and the APIs that shows how much funding your server has, kind of like how Wikipedia does it, and donations could even give little perks such as a special donor badge (similar to how Signal works!).
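Here is a minimal sketch of what that built-in, at-cost billing could look like. The rate constants are made-up placeholders, not real object-storage or bandwidth prices, and none of these names come from any actual Mastodon code:

```python
# Hypothetical per-user billing sketch: each user pays their share of
# hosting costs, computed from stored media and API traffic. Rates below
# are illustrative placeholders, not real S3 or bandwidth pricing.

STORAGE_RATE_PER_GB = 0.023   # $/GB-month, placeholder object-storage rate
API_RATE_PER_10K = 0.01       # $/10k requests, placeholder

def monthly_user_cost(stored_gb, api_requests, base_fee=0.50):
    """Bill a user their share of hosting costs, plus a small base fee."""
    storage = stored_gb * STORAGE_RATE_PER_GB
    api = (api_requests / 10_000) * API_RATE_PER_10K
    return round(base_fee + storage + api, 2)
```

For example, a user storing 10 GB of media who makes 100,000 API requests in a month would owe `monthly_user_cost(10, 100_000)`, i.e. $0.83 under these placeholder rates. The appeal is that the math is legible: a user can see exactly which of their habits (hoarding media, heavy client polling) drives their share of the bill.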

If you read this far you either agree with me or are angrier than anyone else has ever been. I respect it either way. The thing is, I just want this project to be successful, and every single time I see an instance close down I get a sinking feeling that this project is just not working. I have wanted to start my own instances multiple times, whether for esports, software, or Seattle, and each time the cost of maintenance and upkeep was the thing preventing me from doing so. The way things are going, if we don’t turn around soon and prioritize this as an issue, we’re going to lose even more servers, communities, and importantly, posts (since they are not portable on Mastodon). I want Bluesky to gain funding normally, without having to resort to further funding rounds from VCs that make everyone freak out each time, given it’s impossible to do so otherwise without “AI” in your name these days. Importantly, I’d rather these platforms be funded ethically with systems similar to the ones I proposed than fully die out and be replaced with the social networks of old again. There’s a chance here to change things for the better, away from the way things used to be. If we ignore this entirely because of a bunch of screaming people that pirate indie games, then we’re gonna be back in the world of data collection nightmares.

Less is More

I think if you had offered me the choice to be more immersed in a computer at any point in my life, I’d probably have asked where I could sign up and how. I spent years dreaming about future, more immersive computing experiences, where the computer and the digital world could replace and surround my physical one almost exclusively. I am doing much better now and don’t think this way anymore haha, but all this to say: the Apple Vision Pro was everything I ever wanted in a computer. To an extent it still is. I remain one of the few users actually putting the device on most weeks, if only to watch TV. As much as I adore my Vision Pro, almost seven months into ownership I have found it is not the life-changing device I wanted it to be. The Meta Ray-Bans are though.

I received a pair of Meta Ray-Bans back in April as a gift from my dad. He works in construction as a consultant, and a contractor he works with gave him a pair. He didn’t really have much use for them, so he gave them to me (after I asked lol). I had wanted to mess around with a pair, but I wasn’t willing to spend 300 bucks on a device that only replaced existing use cases (ironic given the Vision Pro, I know). I first got them when we met in Sedona for my cousin’s wedding, and for the rest of my time there I just could not take them off. They were unobtrusive but intuitive, and have become maybe my favorite product I use on a regular basis since my Apple Watch.

Let me take a step back and explain why, but to do that I have to explain exactly what these glasses are and what they’re good for. Meta has done a decent job of showcasing the camera aspect of these glasses in general. If you use Instagram semi-regularly, I’d be shocked if you haven’t seen a story that says “Taken with Meta Ray-Bans” at the top. They are great at taking candid photos and recording short vertical videos in optimal lighting conditions, such as well-lit interiors or anywhere outside during the day. While not as good as a modern phone camera, the quality loss is more than acceptable given the convenience and accessibility of the camera being on your head: when you are wearing these things, you often take photos you otherwise wouldn’t. It’s less a discussion of “my phone does this better,” and more “if I only had my phone, I wouldn’t have taken this to begin with”.

The camera is not by any stretch excellent though. The (remarkably well designed and organized) Meta View app does a lot of processing on the images, similar to most phones, to the point where they look as if they were shot on an iPhone X or a similar-quality phone, but the camera cannot handle any form of low light at all. This is expected, and it’s the main time I end up taking my phone out of my pocket. Wearing them, I find myself reaching for my phone the way I’d reach for my DSLR or mirrorless (especially on the iPhone 16 with its Camera Control button): when I want an excellent, fine-tuned shot of something, rather than every time I want to take a photo. You probably know all this though, it’s what Meta shows off. There’s almost no comparing them to the camera of a modern iPhone, or even any modern budget smartphone. I’ll always bring my phone with me to take photos, even though I have a cellular watch. I stopped bringing my AirPods though.

I don’t have a car anymore. I ended my Model 3 lease for a bunch of reasons two years ago, and live car-free in the Capitol Hill neighborhood of Seattle, basically one of the only areas of this city where it’s feasible to do so. Wearing my AirPods while I walk and bike around listening to podcasts is great, and makes them an even more essential part of my everyday life. Their transparency mode is great and their microphones are excellent, but it’s just not comparable to wearing nothing over your ears at all! The “hidden” and killer use case of the Meta Ray-Bans is the speakers. They have two tiny speakers that shoot straight at your ears, letting you listen to anything from your phone without any form of earbuds or bone conduction. This lets you talk to the voice assistant (which is just decent, by the way), read text messages, listen to music and podcasts, take phone calls, etc. It’s very hard for anyone else to hear what you’re listening to, unless you blast them at max volume in a very quiet room close to someone. This is the killer use case for these things: essentially a pair of headphones built into your glasses. It’s a lot safer for walking and biking in a city, since I can hear a car better than any passthrough will offer me, and I can walk into a coffee shop and talk to someone without taking anything off my ears.

The Ray-Bans are part of Meta’s strategy to offer low-cost consumer devices with really solid product stories, ultimately building to a commercialized version of their Orion prototype. Their goal is to take individual features they expect you’d use if you were wearing Orion every day, chop those out, and sell them as fully fledged end-to-end products you can buy today. The camera, the audio, and the voice assistant are three of those features, versus the hundreds if not thousands that I’m sure could exist in Orion. Importantly, the operating system Meta has shown in Orion currently does not exist, either on the Ray-Bans or on their Quest headsets, which perform a similar function but for gaming and entertainment instead.

Apple has the opposite strategy. They have built what is an incredible feat of engineering in visionOS, and their goal is to build products that fully commercialize that entire operating system today. Every app, every use case that will exist in an eventual Orion equivalent from Apple can be built today on the Vision Pro. Because of this, it feels to many that the Apple Vision Pro exists to be a developer kit for eventual “Apple Glasses” rather than a complete product with well-thought-out user stories. In my last full post, I said that I believe visionOS to be half a decade ahead of Meta’s Quest OS (which has since been renamed Horizon OS). The gesture and eye-tracking controls, windowing system, and native SDKs blow the Quest’s makeshift Android out of the water, especially if we’re targeting Orion as the end goal. My opinion has not changed here, but after experiencing the other half of Meta’s product strategy, I believe this was a deliberate decision they made rather than a simple oversight or a lack of innovation.

That might not matter for Apple. The iPhone was one of the most disruptive and successful consumer products of our lifetime. Regardless of whether or not you have an iPhone, you are using one of its descendants. It is a product so successful that society reshaped itself entirely around it and its capabilities. If I were Apple though, I would still be a bit worried, if only because we are not yet in the “iPhone 1” phase of this market. If we look at AR glasses as analogous to the iPhone, everyone rushing toward this product is currently building BlackBerrys and Palm Treos, waiting for the supply chain and tech issues to resolve themselves so they can sell these devices for under ten thousand dollars. In that analogy, Apple just revealed their plans for multitouch in 2004 or so.

The iPhone had a year’s head start over Android by the time it launched, and even when the first major Android phones started to hit the market, they were years behind in usability and interface design. They only fully caught up to the iPhone consumer experience in speed, performance, and design almost a full decade later, and now only excel over the iPhone in a few areas (notification management being an important one). What I mean to say is that if this space is actually transformative, Apple and Meta are going to release the iPhone equivalent at about the same time. They’re going to use the same control methods (Meta has apparently retooled their neural sensor band to use the same tapping gesture as visionOS instead of a phantom limb) and they’re going to share the same UX conventions for windowing and app design. At Meta Connect, they showed they are finally adding Android app support to the operating system, starting the process of plugging the gaps between the Horizon OS and visionOS SDKs.

We’re many years out from the actual product versions of these devices. When I used the Apple Vision Pro for the first time I was floored. I went home with mine absolutely ecstatic, and I still really enjoy using the product, if only as a way to cosplay the future of computing. However, the product is not improving. If anything, it’s actively becoming worse. 50% of what I do on this device is watch YouTube videos, because I can’t enroll it in corporate management to do work or take Teams meetings from it, and I can’t use more than one monitor on it with my Mac (I know ultrawide is coming). Without work as an excuse, I have been using it almost exclusively as an unbelievably overpriced bedroom television / airplane monitor since I purchased it. Now the unauthorized YouTube app I use is going away, with no release date for an official one in sight, and the unofficial PlayStation Remote Play app keeps stuttering because of the way Apple devices handle communicating with each other. Meanwhile, almost every day I find more use cases for the Meta Ray-Bans. These aren’t necessarily competing products today, but before the end of the decade they will be. I believe the Apple Vision Pro released relatively early in this race so that Apple could build a more robust app ecosystem than Meta’s, by using the strengths of iOS to their advantage. Developers have not responded in kind.

Post Meta Connect and my experiences with the Ray-Bans, I believe the two companies are a bit closer in this eventual category than they were back in February. If you asked me which of these companies’ devices I am putting on my face every day though, unfortunately it is not the Apple product. Meanwhile, the Ray-Bans are so great at what they do that I bought myself a second pair. Hopefully Apple learns from this and releases a real competitor, and not just AirPods that can take photos.

I love Deadlock so much. 150 hours in since I was first given access earlier this year. My favorite game since Overwatch.

The reMarkable Paper Pro looks amazing and I’d totally upgrade from my 2 if they added a digital book store. It would add tremendous value to the device and finally make me ditch my Kindle.

Anyone gonna be at PAX West today? Will be wandering around the show floor for a bit

Neat, I have a way to cross-post easily to both Bluesky and Mastodon using micro.blog now! I’ll absolutely use this in the future for thoughts, especially once Threads is implemented.

Reality Check: Where The Vision Pro Is Going

Is it the Mac of the future or just a new iPad?

Last weekend I went to the Apple Store at University Village in Seattle to pick up my Apple Vision Pro. I have waited for this device since the first rumors of the project sprung up years ago. Those rumors, along with Meta’s (then Facebook’s) purchase of Oculus and the eventual release of Meta’s Infinite Office demo, made it incredibly easy for anyone to understand what the long-term vision of both companies was: a spatial computer, an infinite canvas free of the confines of a screen. The Apple Vision Pro, while by no means perfect, is the closest we have come to realizing this future so far. To me it feels almost like the dream device I have been waiting my entire life for, one that brings the experience of a computer out into the real world. Every morning I wake up, having just finished a dream, and the headset somehow doesn’t disappear from my reality. It feels like a device that shouldn’t exist, and yet it is in my life at last. The computer I have always wanted, the device that pulls the experience out of a screen and into the world around me. I am able to litter my apartment with windows and views into the digital world, a world I hold dear. I love computers. I always have, I always will. I just wanted to write that before I spend the next 2.5k words admiring and criticizing this device I truly adore more than almost any piece of technology I have bought in the past decade.

I wouldn’t go so far as to call myself a VR expert, but I have been an enthusiast for over a decade at this point. I used the first Oculus DK1 the month it released, tried the Crystal Cove prototype, and purchased a CV1 at launch, then the Touch controllers, an Oculus Go, a Quest 1, and a Quest 3. This is not to claim that I am the only one able to talk about the Vision Pro critically, or that I am the best person to analyze it, but rather to explain the previous experiences that set my expectations. In many ways the Vision Pro is a huge leap above them, and in a few surprising ways barely an evolution at all. The screens are the story here. I believe the Apple Vision Pro is the “retina moment” for XR devices. Unless I specifically try my best to look, I can’t see any pixels at all. This barely puts it under the “retina” flag, and I believe Apple has done their best to avoid using the term themselves, but the moment I put the Vision Pro on echoes the moment I saw my first Retina Mac. While the resolution of the device is not as clear as my iPad or my Studio Display, you’d have to closely analyze text on the device to notice much of a difference. Additionally, keep in mind that while each eye has its own 4K screen, you are never going to be utilizing all of those pixels to watch a film, so all your movies are actually rendering at sub-4K resolutions. That’s still better than almost any laptop or tablet you can find on the market. It’s not technically better than a 4K OLED, and I found myself not noticing much of a difference between my TV and the Vision Pro due to the “distance” of the virtual screen from my face, preferring my TV if only because I don’t have to wear a device on my face to use it.

What has underwhelmed me the most so far is the passthrough. The Verge’s Nilay Patel called camera-based passthrough a “dead end technology”, and after a week with this device I am inclined to agree with him. My first thought putting the Vision Pro on my head, having experienced the passthrough of the Quest 3, was “is this it?”. It is very good, better than the Quest by a bit, but it was not the step change I was expecting. Rather, it passes a barrier of usability that the Quest 3 could not: it’s the first passthrough I feel fine using for extended periods of time. But while it looks better, it has not solved any of the problems of camera-based passthrough in the slightest. The increased resolution of the Vision Pro’s displays compared to the Quest 3’s only highlights how inadequate the cameras on either headset are compared to the human eye. The biggest surprise for me is that the world is still blurry. You can read text off a monitor, but that text is a lot blurrier than text in visionOS. You’re barely able to read watch notifications, and there’s a noticeable dip in quality when watching my TV through the device. Make no mistake: if you are using this headset, you want to avoid using passthrough for anything other than seeing where you are and talking to people outside the headset. Because of this, I believe camera-based passthrough is actually a huge blocker for mass adoption, as it is the main reason I take the headset off. It is a larger tradeoff than I expected for the freedom you gain with the “infinite canvas,” as Apple puts it. This is one of the reasons I think it works better as a laptop / desktop replacement. In my home office and at work I am essentially staring at walls anyways, and the benefits of that infinite canvas are clearer to see. Without it though, all I see is my orange cat looking washed out and blurrier than in real life.

There’s no getting around the price though. The Vision Pro starts at $3,500. That’s for the base model with 256GB, which, to be completely clear, you should not buy if you intend to download movies. This puts off anyone besides early adopters (me) and software developers (also me) from buying this device. It’s hard to judge the headset without talking about the price. If you disconnect the cost from it, it’s one of the most fantastic products Apple has ever built, in terms of both hardware and software engineering. It is the best way I have ever watched movies, besides my Sony A85J / Sonos surround setup in my living room or my local IMAX theater. It is the best portable computing interface I have used, allowing me to multitask beyond my wildest dreams on a device I can fit in my backpack. In essence the Apple Vision Pro is a screen with a computer built in: a portable workstation you can use on a plane, a giant flatscreen you can fit in a bag, etc. However, you can’t talk about it that way because we live in a capitalist society, and while I am fortunate enough to be able to afford this product, most people aren’t as of yet. To me it is worth the cost, but I would put it incredibly low on the hierarchy of technological needs for almost anyone considering what computer to buy.

Talking about what this device can do often seems a little boring, but Apple’s ambitions are super clear when you first put it on your head. The cursive “Hello” is the first thing you see when visionOS boots to the setup screen, along with the familiar Mac chime. The tutorials in the OS that teach you how to interact with it state that touching your fingers together is “like a click on your Mac”. Apple is pitching this as the laptop / desktop of the future, and minus a few changes to the hardware itself to make it lighter and cheaper, I believe they are on the right path. Functionally, however, it is an iPad.

People online have made this comparison / complaint about the device: that from a functionality standpoint it isn’t capable of performing any tasks that my iPhone or my iPad can’t, and I’ll break down that argument a bit. I’m not going to argue with it directly as a universal claim, because it’s easy enough to say “well, Super Fruit Ninja lets me cut the fruit with my arms instead of my fingers” and call it a day, but I think that circumvents the issue rather than engaging with it directly. By the same logic, my iPad can’t do anything my phone can’t either, besides use the Apple Pencil and have multiple apps open at the same time. The Vision Pro may be an iPad on your face, but I remember distinctly when I called the iPad a “big iPhone”. I was right, and ultimately the modern product line resembles the Surface more than the original iPad, but at the time the multitasking and writing functionality of the iPad were not in the lineup. Now that they are, the increased screen real estate, pen support, and window management give it value over my phone.

The Vision Pro, in its least charitable reading as simply an iPad on your face, makes sense in that same context. My iPad, while able to multitask suitably for smaller tasks such as writing this blog post, has many limitations. Regular iPads can only utilize 2 to 3 apps at the same time with iPadOS’s Split View, and M-series iPads can only use 4 apps at once with Stage Manager (even when plugged into a Studio Display). If the Vision Pro has a limit on how many apps you can have “open” at once, I haven’t found it yet. I’m sure this is a clever sleight of hand from the operating system, which in my experience is far closer to iPadOS than macOS, and seems to suspend apps you aren’t actively using in the background. However, I have had more than 4 apps operational at the same time in my field of vision, which puts the system’s capabilities above what my iPad can do, and my iPad is a device with the exact same system on a chip as the Vision Pro. Since the early 2010s, the goal of companies like Microsoft, Apple, and Google has been to build a product ecosystem where software transcends the form factors of devices, and where those devices are designed for specific aspects of your life rather than the software they are capable of running. The app I am writing this post in is on my MacBook, on my iPad, and on my iPhone, universally synchronizing between the three with all functionality intact. I could write this post from my Mac if I wished, but I prefer the portability and focus of my iPad. The idea that primarily using apps which exist on other devices disqualifies the Vision Pro as a product category is an outdated and silly one. That’s just not how computers work anymore. I’m sure most of your time on a desktop computer is spent in a web browser and in Electron apps that are on your phone as well, but the user interface and input mechanisms on a desktop or laptop are better for certain circumstances (extended periods of productivity).

This is not to say there aren’t complaints about the device and its position within the Apple ecosystem. Apple has been trying to build what they believe is the successor to the Mac for years. Steve Jobs pitched the iPad as the device that would kill netbooks (he was right), and that product ultimately morphed into a Surface competitor with keyboard and mouse support. The iPad and Mac have been on a collision course for years, with the limitations of the former and the relative complexity of the latter making it hard to recommend one or the other. On the one hand, a person might want a lower-priced and simple device to take handwritten notes on in school as their main computer, in which case an iPad is perfect. On the other hand, they might be majoring in a field which requires software that the iPad can’t run, or in the case of computer science, will never be allowed to run. This means entire professions and career paths are locked out of ever using the iPad as their primary computer without the assistance of either a cloud machine or a laptop. One of those professions is software engineering. Apple does not allow unsigned code to run on the iPad, which includes any code a person would write.

The Vision Pro has the same restrictions. This means that I will never be able to do my job on the headset alone. For the past week I have been using a combination of the virtual display feature to mirror my Mac, and the Windows 365 app in Safari, to connect to both my laptop and my cloud PC, which enable me to do my actual job. This makes the Vision Pro the most expensive thin client in the world. I’m not particularly upset by this; I knew this was the deal going in and accepted it was a limitation of the device. My plan was always to use my Mac to handle VS Code and my headset to handle browsing the web, email, chats, meetings, etc. However, I now believe a significant part of the device’s value proposition is missing from its functionality. Spending time in the device and experiencing the interface for the better part of a week made me understand just how much Apple needs this to replace the Mac in order for it to succeed in the short term, but even as their primary target for the device I am unable to leave my laptop in my backpack. As of now there is no way to access a native shell in visionOS. There is no way I can download VS Code or any equivalent, and even if there were, no way for me to build my projects offline without connecting wirelessly to a computer running macOS, Windows, or Linux. Even if every professional program in the world gets ported to iPadOS / visionOS, my profession will never be able to complete its work on this platform, even though people in my exact profession were the ones who built the Vision Pro in the first place.

When looking at the competition, Apple’s interface is maybe a decade ahead of the Meta Quest OS in terms of functionality and capability. It has a native window UI framework written in Swift, where the Quest falls back to just wrapping web apps. It has an actual windowing system, where the Quest only supports 3 windows of different programs next to each other. It allows window resizing, where the Quest only has two options (in front of your face and movie theater), and more. It’s so far ahead of Meta right now that if I were Mark Zuckerberg I might be genuinely frightened that Apple could win this space. He’s not frightened though, because he understands that being an open platform is an extraordinary advantage to have in the race to replace the desktop computer. Once Meta’s operating system catches up to visionOS’s functionality in however many years that takes, unless the target use case of spatial computing being for productivity / work changes significantly, the Vision’s hardware superiority will fundamentally cease to matter. If the bet that these devices will replace laptops / desktops is correct, the one with fewer limitations will always win. It doesn’t matter if the UX is worse, it doesn’t matter if the screens are worse; just look at the current laptop / desktop market for professional use and try to prove me wrong. If Apple’s ambition is to sell these to offices and companies to replace the laptop, you can’t do that while remaining a closed operating system with limited functionality. Meta will open the doors to the Quest more than they already have and sell a ton of them, because it’ll be capable of running dotnet by then and visionOS won’t, so guess which one I’d have to buy to do my job.

This device is amazing. It has already replaced so much of what I use my iPad for (except writing this post, as Ulysses hasn’t enabled visionOS support yet) and is a genuine glimpse into the future of computing. For that alone, it was worth the purchase for me. It has brought the future so close to my everyday life that I will appreciate and cherish it as long as I can. However, it is only one possible future. I would love to see that future come to pass in a world in which Apple is less restrictive about the use cases of their products, but I have no fear that a software ecosystem similar to the iPad’s will ever replace the Mac, because it functionally can’t. Maybe it’ll replace the iPad, but I’d hope Apple has more ambition for this product than simply being an iPad killer. The amount of time, energy, and effort they put into visionOS makes me think the ambition was there, at least. I desperately want this computer to be the Mac of the future, but the limitations of the operating system itself make me believe that Apple may be doomed to repeat the path of the iPad. I vastly prefer visionOS to the current Quest OS, but in order to win the future both Apple and Meta are chasing, Apple’s approach towards control of their operating system needs to change a bit. We know Meta’s hungry to succeed here, and if this product category is the real deal, they are willing to do whatever it takes to own it. Apple just needs to give up a tiny bit of control in order to compete effectively with them over the next decade, but the jury is out on whether that’ll happen. I hope they do, because if I have to pick right now, this is the OS I’d rather write code on.

My Thoughts on AI Art

I believe that outside of essential human rights (food, water, healthcare, housing, communications, etc.), art is the most fundamental and important aspect of life. I have the utmost respect for artists, writers, illustrators, filmmakers, game designers, etc., and I believe that they create the context that helps us find meaning in life. I can’t imagine my life without the art they create, as it gives it flavor and meaning and helps me explore both myself and the world. I love films, watch a bit of television, listen to a lot of music, and play / collect a ton of games. In addition, artists have surrounded me all my life. My roommates in college were film students, my best friends from high school are animators, my father is a painter and a writer, and for most of my childhood I was an amateur video editor before anything else (even programming!), working in Final Cut Pro and making my own home videos.

I felt this disclosure was important to write, because I believe there is a rightfully earned stereotype of the “AI Art” tech guy, and I’d like to ensure that I distance myself from them. They’re a “software eats the world” believer to an extreme, and the new rise in generated images / music is allowing them to extend that theory to television, music, movies, games, etc. Now, I think all art is subjective and there’s no such thing as “objectively bad”, but in my humble opinion this usually boils down to a case of nerd brain and exclusively consuming content that does not challenge them at all. I don’t blame them for this; the most popular film franchises on the planet are (in my humble opinion) little more than comfort food, fun distractions but wholly unprovoking, existing to validate interests and fandoms and drag the viewer along an endless track like a hamster on a wheel, so if you’re not a cinephile or a music snob it’s easy to fall into a consumption lull. In a world where we define the set of art as exclusively these films and their equivalents in other mediums, I believe that AI could be the future of art. Fortunately, we do not live in that world, and I believe that we never will, due to the way both these tools, and the very essence of good art itself, function.

In American politics, one of the most common grievances of the right wing is that the left “controls Hollywood”. Ignoring any other dogwhistles that might be lying beneath that statement, I think it’s pretty fair to say that conservatives are in the overwhelming minority of artists. There could be many reasons for this; their ideological tendency towards hyper-capitalism might play a role, as kids growing up reading The Fountainhead over and over probably won’t decide that they want to go to film school and undertake massive debt to be an underpaid gaffer if their only motivating factor in life is making the number on their Chase app go up. I don’t entirely believe this notion though. For one, conservatives do make art, it’s just very bad art. The art they want to make is also regressive in its nature; rather than pushing boundaries and creating new pieces, they wish to simply replicate existing styles and patterns of the past. Paintings trending towards photorealism and classical marble statues by artists such as Michelangelo plaster their Twitter profile icons and banners. “Retvrn,” they say, unable to process anything that requires interpretation and context of the struggles which exist in the modern world, for if they could do this, they might not have been conservative to begin with.

DALL-E 3 could easily generate the left image. Maybe even the right. Could it generate what exists in MoMA next year?

I don’t believe that generative AI is regressive or conservative politically, but that similar principles apply to its ability to generate works of meaning. All AI and machine learning, even generative, requires human input to function, both in terms of the data utilized when training the model and the prompt being provided. Thus, its output exists only as a creative subset of its input. ChatGPT can write a poem in iambic pentameter about the lack of exclusive games on PS5, for instance, but it will never generate a new form of poetry. It will never create new genres of music, new styles of animation, new stories based on the context of our world, because it requires our set of experiences and interpretations. Therefore it can never surpass or learn from them independently, only meet them. You’ll never get the next Seinfeld through the direct output of an LLM, only an endless mimicry of what came before. A world in which generative AI is the only form of artistic output is a world similar to that of the Matrix: stuck in the 90s forever, unable to change or grow into something more.

However.

I believe that generative AI, like many advancements that came before it, can be a tool which is utilized to create art, if at an absolute minimum in order to say something about technology itself and its place in society. On the last day of 2023, VXTwitter user donnelvillager posted a response to a tweet containing a photo of renowned artist Keith Haring’s “Unfinished Painting”. For those (such as myself) who were unaware of the initial work: at the time of its creation, Keith was suffering from AIDS, an illness he would ultimately succumb to.

I won’t explain the meaning of this painting to you because I genuinely hope you’re capable of middle school level artistic analysis after the context I provided. Regardless, donnelvillager responded with this.

The reception to the AI-generated completion was instant and vitriolic. The top response states that this is a desecration of the original artist’s work, which, given that it is a PNG on the internet that neither replaces nor paints over the original, is pretty overblown. People soon realized that donnelvillager’s tweet was satire of the “AI Art” tech guy and how they view art / the world. I’d like to propose that it also makes an incredible argument for the use of these AI tools alone being art in and of itself. My initial thought was that donnelvillager’s work was incredibly reminiscent of the reception of a specific work by Duchamp.

Image provided by the San Francisco Museum of Modern Art

Personally, this is one of my favorite pieces of art ever made, but in a traditional sense, Duchamp did not “make” it. He did not shape the porcelain of the toilet himself. He took the urinal, signed his name, and placed it into a gallery. In the process, “Fountain” became a statement about platforming artists, about fame, and about the very nature of creating art itself. In the postmodern sense, the context surrounding a work is as much a part of the art as the text of the work itself. Similarly, utilizing a generative AI tool to strip the original image of its meaning is, in and of itself, creating new meaning and making an incredibly bold statement about the world and the creation. This is art. I never would have thought of utilizing these tools to provoke a reaction from the public by intentionally stripping an artwork of its original meaning, in order to underline the importance of humanity in art, authorial intent, context collapse, and so much more. To me, this proves that these tools have at minimum a limited use in the creation of art.

Pulling back from the whole philosophy art snob stuff for a minute though, I also think these tools may provide assistance to human artists and shouldn’t be thrown out entirely. Whether supplementing existing writing, improving the grammar of non-native speakers when creating new stories, allowing students to study classical styles of art by replicating them over modern subjects, or even adding a bit more randomness and interactivity to games and crafted character dialogue to make players feel as though they have more agency. All of these might be terrible ideas, but I believe there are more areas to explore in using these tools to build expressive works.

When I was searching for a photo of Fountain to use in this post, I first clicked on the photo provided by Google’s excellent Arts & Culture tool. Google is of course hot on the generative AI train, building their own image generation tools and large language models, attempting to see where they can be fit into their existing product suite. There I noticed an interesting button. It does exactly what you think it would, and it gave me an idea.

Fountain, by Evan Hirsh (2024, created with Microsoft Designer)

There. A beautiful work of art stripped of its original intention and context, replicating the pattern of a culture long since dead to satisfy the part of Michael Knowles’s prefrontal cortex which struggles with this kind of thing. In and of itself a parody of “Fountain”, but putting a spotlight on its original meaning by highlighting the absence of it, by creating something that specifically caters to those with limited understanding. This is in and of itself a reason I believe that the future of the “AI Art” guy will never fully come to pass. Even before generative AI, the past decade of film / television has been about directly catering to what audiences want. Reading their tweets to ship characters together and retcon films they don’t like, analyzing their streaming behavior to greenlight entire shows… We already live in a world of computers telling the industry what to make, and while it sucks, it has directly pushed independent films in the opposite direction. I don’t think a computer could ever make Beau Is Afraid, and if one did I’d fully support Sarah Connor flattening it under a pneumatic press. I also believe that the success of major blockbusters from auteur directors, such as Barbie / Oppenheimer, is in part due to the start of a backlash wave against dull cookie-cutter algorithmic art, and I am hopeful that it pushes the industry away from the trends of the past decade towards a more hopeful future. That’ll only happen if you, the person reading this, went out to theaters to see them though. Otherwise, the AI Art guys were always right.

Also, watch BlackBerry please, it was really good.

Looking back at tech in 2023

When I started this blog, one of the lists I wrote was of the tech that made my 2022. It was an incredibly fun and exciting year for me as both a gadget lover and a technophile, with a ton of interesting ideas about how we use both hardware and software. Since I wrote that post I purchased yet another Studio Display (for when I work remotely from my mother’s house), have defaulted to using Arc for almost all my browsing needs and eagerly await both the Windows and iPad apps, formatted my MacBook due to the crust of an 8+ year old disk image that predated the M1, and sold my Steam Deck to my cousin (incredible hardware, it just wasn’t tailored to my needs and couldn’t replace my OLED Switch). Regardless, 2022 was an incredible year for tech in general, and with the LLM boom which started a little over a year ago I was expecting incredible things from 2023. Unfortunately, this didn’t materialize. Whether it was due to the industry-wide layoffs that happened at the start of the year or the economic slowdown that followed, this year felt pretty depressing overall for me as a gadget / app lover. When I opened up The Verge’s homepage each morning as I rolled out of bed, I used to feel a jolt of joy and excitement. I was in an industry that was truly building things, creating products that were exciting and interesting. Recently though, the news has turned a bit south.

The collapse of Silicon Valley Bank, along with the self-imposed tech slowdown to preempt a recession that never happened, seems to have cratered both the creation of startups and the expansion / growth of interesting products. This, combined with the rapid rise of ChatGPT and the ease of implementing a few OpenAI calls into a product, led to almost every company shipping some GPT-3.5 text generation feature this year. Whether it drafts your notes for you or writes your Word documents (disclaimer: I work on Azure and have used this feature to write Word documents), it seems that LLMs are poised to be the next technology that changes how we interact with programs. I personally believe that this technology will result in some sort of shift in how we interact with computers. Whether it will act as a course correction to the false start of AI voice assistants in the mid-2010s, or lead to the end of my profession as I know it by democratizing software engineering for all, it’s evident now that this is at least more than the false starts of the previous few years.
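To give a sense of how low that integration barrier actually is, a “drafts your notes” feature can amount to little more than a single chat-completion request. This is a minimal sketch of the pattern, not any specific product’s implementation; the helper name, prompt wording, and parameters are my own illustration:

```python
# Sketch of the kind of GPT-3.5 integration many products shipped in 2023.
# This only assembles the request payload; a real product would POST it to
# the chat completions endpoint (e.g. via the `openai` client library) and
# wrap it with streaming, retries, and content filtering.

def build_draft_request(title: str, bullet_points: list[str]) -> dict:
    """Build a chat-completions payload asking the model to draft a document."""
    prompt = f"Draft a document titled '{title}' covering:\n" + "\n".join(
        f"- {point}" for point in bullet_points
    )
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are a concise writing assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,  # some variety, but not too loose for drafting
    }
```

That a feature like this is a dict and one HTTP call away is, I suspect, exactly why it appeared in every product roadmap at once.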

Both Cryptocurrency and The Metaverse were the big losers of 2023. Cryptocurrency I have found to be a solution in search of a problem, harder to use than Apple Pay by an order of magnitude with no benefit to the regular consumers of financial products. Not to mention the numerous issues in the space that would prevent any casual consumer from ever interacting with the tech in a way that doesn’t abstract away the issues (and therefore the supposed benefits) of the technology to the point where it becomes just a risky, dumb investment. The Zuckerberg vision of a Metaverse seems blissfully unaware of the fact that it already exists for everyone who is a millennial or younger. The pandemic came and went. We were all forced away from our physical spaces into digital ones, such as Discord, and Meta fundamentally missed its moment to own those spaces, instead chasing a hypothetical future platform on an unproven computing device. The computing device itself, though, might bear more fruit for them.

Spatial Computing is a term I somehow stumbled my way into by accident when I was writing (and deleting) the very first post for this blog over 2 years ago. This year, though, it is finally shaping up to be the next real hardware / software race to win. After almost 7 years of being the tech industry’s largest open secret, this June we finally caught a glimpse of Apple’s entry into the space, the Vision Pro. Having spent a large amount of time in both my Meta Quest 3 and the visionOS simulator on my Mac, I believe that in terms of software alone, Apple is at least half a decade ahead of Meta. They’ve built a real operating system here, with an input method that doesn’t tire your arms, support for multiple windows, a native UX framework, an intermediary view type between windows and the full immersive apps of the Quest, etc. However, the price, whether they’ll be able to bring it down to a point which satisfies consumers, and whether this is a product anyone besides me actually wants all remain to be seen. All I can say is that every time I put my Quest 3 on a friend’s head against their will and ask them to drag a window around the room with their two fingers, they seem to just get what’s coming.

If there’s one through line for tech news, it’s that 2023, more than any year in recent memory, has been about events that aren’t happening in 2023. It was about killing the trends of the past and hypothesizing about what the future will be like, but outside of the rapid rise of LLMs, very little else affected the present. So with all that being said, what was the tech that made my year? It might be a bit silly to go back to talking about gadgets and apps after all that, but ultimately I believe that this is what everything in tech should be about. Solutions in search of problems never work, and if a new technical development doesn’t improve our lives and the products we use in a meaningful way, it might as well not exist. So here are the things I bought, used, downloaded, and otherwise did which meaningfully improved my life in 2023.

Backbone One

When I was a kid, instead of doing homework in my study halls or paying attention in Spanish class (lo siento mucho, Sr. Elgin), I kept drawing designs for what I’d want in a portable Xbox. In 2008 or so I had just gotten my Xbox 360 and Halo 3, a game which changed my life and played no small part in directing me to where I work now. At the time the DS / PSP were the main portable consoles, duking it out for the last remaining market share before the iPhone swallowed them both whole. During this time, I always wondered why Microsoft wouldn’t make a portable Xbox. I wanted to play Halo on the go so terribly, and it just seemed crazy to me that they wouldn’t make the product that would drive my entire middle school crazier than the iPod Touch. Flash forward almost 15 years, and we have the Backbone One. When I received my Backbone in the mail almost two years ago, I had a crack theory that Backbone wasn’t a real company. The product was so excellently crafted around the iPhone experience, so brilliantly above its competition in the space, that I believed it had to be a shell company made by Apple to test the market. That wasn’t true; eventually they released Android versions and a cool collaboration with PlayStation. But it really unlocked what was at the time the nicest screen I owned, and the most powerful portable gaming device in my apartment, to play the games I actually enjoyed. I could play shooters such as the ill-fated Apex Mobile, dozens of great Apple Arcade games, PS5 games such as Persona via Remote Play on the bus to work, and of course Xbox Cloud Gaming wherever I wanted. In many ways this device felt like the future: the missing ultra-portable gaming system that Nintendo no longer made after the 3DS, and the Vita successor I never had. For 100 dollars, this device is a steal, and the only reason after two years I needed to buy another one was because the iPhone 15 switched to a USB-C port.
If you have an Xbox Game Pass subscription, or a PS5 hooked up to ethernet, I can’t recommend this thing enough.

Remarkable 2

It’s no secret that I have ADHD. I tweeted about it a bunch back when that was a thing, but I’d like to give a bit of background first. I have been diagnosed for almost as long as I can remember. When I entered elementary school / kindergarten, teachers realized that I had a problem paying attention in class compared to the other kids. I wasn’t hyperactive, but I had an incredibly hard time getting out of my own head for things that didn’t continuously and actively engage me. I was told I had ADD at the time, which has since been reclassified as ADHD-PI. I never took medication for it until high school / college, but I still had trouble focusing in class. One idea both I and my teachers had simultaneously was to use my interests to my advantage. I loved computers so much, so maybe I should use them more than other students! Taking notes there, etc. This also helped solve my severe organizational issues concerning physical paper, as organizing digital files came to me so much more naturally than a folder of paper stuffed in my bag. This didn’t really work much until college, where I could use an iPad / Apple Pencil to create handwritten notes. Skipping ahead to my career: note taking is still incredibly important to me in meetings, but I found myself switching programs constantly, and typing never truly helped me remember the subject matter I was being exposed to. At first, the Kindle Scribe caught my eye. I use a Kindle to read a couple of books a year, and having a paper tablet I could write on seemed awesome! However, upon receiving it and learning I couldn’t actually write on my books with the pen, I became increasingly frustrated by its ultra-limited note taking functionality and returned mine. Instead, I purchased a Remarkable 2, with the pricier stylus and the official cover. My first thought was that Amazon should be embarrassed they released the Scribe to compete with this. The Remarkable 2 is a masterwork in mindful technology. It feels like paper.
It writes like paper. Going back to my iPad Pro and Apple Pencil feels wrong now, like walking on an ice rink in sneakers. The way it writes, reads, how it synchronizes to my devices, works with my cloud documents, everything is almost perfect. I started using Hyperpaper, a brilliant PDF planner program, to organize my work tasks on my Remarkable 2. I started taking it exclusively into meetings at work. It has changed my life, I think. It’s by far the best digital notebook I have used in a literal lifetime of trying out digital notebooks, and I can’t recommend it enough.

Making My Twitter Account Private

The fire within me that hates Elon Musk will never stop burning for as long as I live. Twitter was so important to me and my life. During the years I wasn’t in the tech industry, Twitter allowed me to reach out digitally to those who were. It broke me out of my lonely room in high school and allowed me to be a part of communities I desperately wanted to belong to. I met some of my best friends through Twitter. I founded a college club on it, co-founded blogs / YouTube channels with the people I met on it, played Minecraft with them, made podcasts, etc. It was so important to me. It’s funny, because now that I think about it, of the people who surround me in Seattle, I met the majority of them, or at least connected with them, through Twitter. What’s happened to that site has been catalogued enough, so it’s not even worth going through here, but I figured it might make sense to write about why I decided to cope with it in this specific way rather than deleting my account or abandoning it like everyone else.

As I previously mentioned, I grew up with ADHD, and Twitter also provided me with an outlet to just throw the thoughts I had into a space where people could hear them and respond to them! It allowed me to act a bit more “normal” in conversations, as I was using it as a sort of release valve for my thoughts - a way for me to “let out” the tangents I wanted to go on and engage that side of my brain which craved that type of conversation above all else. I found that while the website has become so much worse over the past year, it’s been hard for me to leave precisely because it still provides that function for me. This felt demoralizing, as none of the alternatives to it have truly panned out in the way I wanted them to yet, so I still needed that “space” to write my thoughts (at least, the short-form ones). However, the new denizens of that platform - the holocaust deniers, the right-wing shitheads, the LessWrong users - aren’t people I wanted to find me. So I did one thing I never thought I would do when I created my account in late 2009: I made it private. I still need a place to post some thoughts and stay connected with whoever I know hasn’t left the platform yet, but I realized I do not want anyone else to discover me. I don’t want these assholes to see me, interact with me, read my posts, etc. Every once in a while, I feel like I am missing out by not replying to a post in public but, honestly, I often find myself forgetting I am private now. It’s a much better experience than being public, at least on this new, transformed platform I find myself stuck in. If you find yourself in the same position as I am, and you’re not required to be on there for work like many in the gaming / athletics spaces, I highly recommend you do the same. It’s the best way to wean yourself off the platform, and my screen time has decreased from hours a day in late 2022 to sheer minutes.
One day, I hope the experience of old Twitter will be reborn again elsewhere, but until then, making my account private has been the best decision I’ve made since the Twitter I loved ceased to exist.

Arc (again)

More than a year into using Arc as my default browser, I have found it unbelievably hard to go back to anything else. This is not because of lock-in, or syncing, or Chromium over WebKit (I actually wish it was WebKit!), but because it has managed to provide a mental organizational framework for browsing the web which removes the overhead of tab management entirely. I keep playing with spaces, folders, favorites, pins, and various organizational systems. We’re hearing murmurs that the long-awaited Windows version is finally on the way, and I can’t switch away from Edge fast enough. Same with Safari on my iPhone / iPad, whenever their redesigned mobile app gets built. They’re doing truly interesting things over there, and to me they are one of the most exciting companies still building in the space.

The PlayStation 5

The PlayStation 5 might be my favorite gaming console ever made. It's wild to say this, as it has been releasing essentially enhanced PS4 games since the generation started, with the first system-seller exclusive being something I complained about a month ago, so I might have to edit my statement a bit. Playing games on my PS5 is the best experience I've had playing video games in recent memory. After the commercial struggles the PS3 underwent, Sony decided to market the PS4 as a pure gaming machine. The changes in the UX reflected this: gone was the XMB UI of the PS3 that sold it as a multimedia HTPC powerhouse, and in its place was a list of games in the chronological order you played them. That was it. The entire UX was a chronological list of games. This UI was so simple and brilliant that Nintendo copied it for the Switch.

The PS3’s XMB interface, showing a list of different types of media

The PS4 home screen showing a list of games. I grabbed this off Google Images, please HBomberguy do not drag me, I pay for your Patreon

Despite this, I still didn't love the way the PS4 worked. For one, it was a single interface for everything. The Xbox and Steam had the concept of the "guide," an interface that floated over your game and allowed you to access messages, join voice chats, see friends lists, and send invites. The PS4 slapped all of this into the same home screen, later introducing a guide I personally found subpar. I never played multiplayer games over PSN for this reason alone, despite having many fun memories with Titanfall 1 and Halo 5 on my Xbox One.

The PS5 has the best UI I have seen on a gaming console. Sure, it has its issues here and there, but it combines the best of the Xbox (the guide) with the best of the PS4 (the chronological list of games, beautiful animations, reliability). It improves on both by splitting media apps such as Spotify and Netflix out into their own little row, and adds genuinely useful features such as "game hints," which act as an integrated player's guide: based on where you are in a game, it provides hints on what to do next and shows you videos on the console while hiding spoilers for sections you haven't played! I haven't even mentioned the controller yet; the force feedback in the triggers provides a genuinely new and improved experience versus older controllers and my computer. It actually makes me want to play games on my PS5 instead of my PC! A pretty alright tradeoff, considering my PC can barely run games at 120 Hz on an RTX 2070 Super driving an ultrawide screen, where my PS5 can push them at 120 Hz no problem. I hope Xbox takes some lessons from this in the future, as it has moved me further away from the PC ecosystem into something beautiful and simple. It just works, and most of the time, it works better!

ChatGPT for iPhone

Last but not least, we have ChatGPT for iPhone. Earlier this year I wrote about LLMs while sleep deprived on a 6-hour flight with an unhealthy amount of caffeine in my system. I threw out a lot of ideas in that post, almost none of them concise, but I think a couple stuck to the wall. The idea I was most confident in was that this was all too expensive for most people, and it turns out it was even too expensive for me! Paying for ChatGPT Plus, Perplexity, and Raycast Pro was stupid, so I decided to consolidate to one subscription. That subscription turned out to be ChatGPT Plus, which I found provided the most utility for my monthly dollar. Is it worth 20 dollars a month? I can't really decide that for you. For me, though, I have found it is. It's reduced my dependence on Google a ton, something Kagi Search or DuckDuckGo couldn't do in the time I spent experimenting with them. I spent the year mostly giving OpenAI 20 dollars for a month, realizing that GPT-4 alone was not worth that money, and cancelling days later. What changed my mind was a few announcements, all of which culminated around OpenAI Dev Day. First was voice chat, which would work with the new ChatGPT iPhone app they released in the summer. Second was web browsing, a feature that significantly reduces hallucinations (without eliminating them) by grounding the model with web pages and Bing searches. Third was multimodal chat: before, you needed to select which version of GPT-4 you wanted (regular, browsing, data analysis, image generation, etc.) in a really bad dropdown menu, and voice couldn't work with any of them. Around Dev Day they fixed this and rolled it out to all Plus users, and a day or two later to the iPhone app.
This meant I could use voice chat and get actual information from the internet for queries where Siri was unreliable, such as whether a restaurant was open today, what happened with a certain news story, or whether one weird behavior my cat was exhibiting meant I needed to take him to the vet (the answer was always no). The final thing that pushed me over the edge was… the Action Button on the iPhone 15 Pro. Turns out they replaced the mute switch on the iPhone with a button you can set to do essentially anything you want, and many people have used it to start a ChatGPT voice chat! I'm one of those people. Turns out having a digital assistant that's actually capable of looking at the internet and answering questions when you hold a button on your iPhone is a useful feature! Who would have thought.

2023 was an odd year. I’m both hopeful and nervous about the future of our industry more than ever. There’s so much I didn’t cover here in this post, and probably won’t because I just finished my coffee and want to leave the coffee shop to check on my cat, finish my laundry, and play Persona. Ultimately, though, I think it might seem like a slow year if you are a product nerd like me, but with hindsight, I believe this year has the potential to be one of the most transformative years in tech since 2008. We just won’t really know if that happens, or even where that change truly is coming from, without the benefit of hindsight. For now, all I can really do is be amazed by the pace of development in this area that felt like science fiction as recently as last summer and push the changes as best I can in the direction I believe would be the most helpful to everyone. Or I could just play more Call of Duty on my phone. I’ll probably just do that instead.

Some thoughts on AI wearables

For the past couple of months, we've started to see a new wave of AI hardware products released into the world: Meta's new smart glasses and the Humane AI Pin. Better writers than I have already commented on these things, but as always, I have thoughts.

First, I believe that AI wearables have an actual market. I think both the Meta and Humane products are trending towards the idea of an invisible computer that eliminates the need for a phone in your pocket all the time, and this is something I'd personally really enjoy. I find myself really wanting both of these products, although I will probably only purchase the Meta glasses, if only to get a pair of transition lenses with Bluetooth headphones built in. Although the Humane product is closer to what I'd want in terms of functionality, it's explicitly designed as a complete phone replacement. It uses a separate cellular subscription without the number sharing of the Palm Phone or Apple Watch, and has no phone app at all to speak of. I can see myself wanting a version of the AI Pin maybe half a decade down the line, when they eventually figure out that trying to compete against the iPhone is a bad idea. Either that, or the whole thing goes under. I do think Humane might be onto something in terms of a dedicated form factor for maybe 20 years down the line, but this looks like a classic case of thinking the consumer base is more ready than it actually is. You need to let people get their feet wet with this idea first, as Meta is doing, without forcing them to take the plunge. Otherwise, change is scary, and the iPhone has Marvel Snap on it, so who are you really gonna convince here?

There are also the rumors of the OpenAI / Ive device, which right now seems to be a bit of a bust. OpenAI (disclosure: my employer Microsoft has invested in / works with OpenAI; these views do not represent my employer or OpenAI, etc.) has been making a brilliant pivot to a consumer company, and as Casey Newton mentioned on this week's episode of Hard Fork, is turning into a generationally important company. The next Facebook- or Google-sized company is forming right in front of us, and it's incredible to witness as a conscious adult who is now a tech worker. Sam Altman has responded to these rumors by saying they just don't have a good idea for a device yet, which doesn't give me much faith that any of these companies do either. Right now, the Meta glasses and Humane AI Pin both seem to be existing hardware ideas that had AI forced into their marketing pitch within the past year. The predecessor to the Meta glasses, the Ray-Ban Stories, had no such integration, and the Humane AI Pin was pitched as essentially a wearable camera in leaked decks going back to 2020. The AI boom has both teams leaning on AI, regardless of how it enhances the product experience, as a way to generate excitement internally and externally for these new devices, rather than because it's something consumers truly want.

For the past few months, I have been paying for ChatGPT Plus. I found that it, along with Raycast AI, are the two LLM services that have stuck with me the most. Raycast's AI features are just convenient as all hell, and they're exactly where I need them when working, whether it's to recall a bash command or to formalize my emails. ChatGPT Plus, meanwhile, has turned into a Swiss Army knife of sorts, a really good value for the 20 dollars given that it has advanced data analysis (it writes Python code for you to crunch stats from spreadsheets), browsing (Bing Chat but way nicer), a very nice iOS app, an AI voice assistant on mobile that sounds a lot like Scarlett Johansson in Her, unlimited use of DALL-E 3 for AI image generation, and unlimited GPT-4 use. The new GPTs feature is a neat little preview of where they want the future to head, but I haven't found it genuinely useful yet in my experiments, other than as a way to save and share custom prompts. I'll hold my judgment until I see what others can do with the platform, though. Regardless, I have found the voice model really useful, especially with browsing added. It's become the thing I use the Action Button on my iPhone for the most, as unlike Siri, it will actually respond to my information queries with voice rather than giving me a list of search results and telling me to figure it out myself.
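For a concrete sense of what "writes Python code to crunch stats from spreadsheets" looks like, here's the kind of snippet the data analysis tool generates and runs behind the scenes. This is a hypothetical sketch, not actual ChatGPT output; the spreadsheet contents and column names are invented, and the real tool reads the file you upload rather than an inline string.

```python
import csv
import io
import statistics

# Hypothetical spreadsheet, inlined so the sketch is self-contained.
SPREADSHEET = """month,revenue
Jan,1200
Feb,1350
Mar,990
Apr,1480
"""

def summarize(csv_text: str) -> dict:
    """Crunch basic stats from a revenue column - the kind of
    throwaway analysis code the tool writes and executes for you."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    revenue = [float(r["revenue"]) for r in rows]
    return {
        "total": sum(revenue),
        "mean": statistics.mean(revenue),
        "best_month": max(rows, key=lambda r: float(r["revenue"]))["month"],
    }

print(summarize(SPREADSHEET))
```

The point is less the code itself than that you never see it unless you ask: you upload a file, ask a question in English, and the model writes, runs, and interprets something like this for you.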

In summary, I think AI works alright on my iPhone and my Mac! So why are a bunch of companies working to build these products into wearable devices? First, I think the appeal partly lies in speed. Second, it's the Apple playbook. By launching "app stores" and building their own hardware that works independently of the iPhone, these companies are attempting to emulate the success of the iPhone for what they believe to be a new computing paradigm. I don't think they're completely wrong for this, but I want to stress something to every single person working on AI products and in the field.

ChatGPT is a 100% free iPhone app.

"GPT-4 wrapper" is considered by many to be a derogatory term, but I believe it's an accurate one. So many companies believe they can essentially white-label OpenAI with a software app or tool and ask users to pay even more than ChatGPT Plus for it. You can't, because ChatGPT is a 100% free iPhone app. The AI tool in your paid notebook app forgets this. Your online SQL query writer forgets this. Moreover, AI wearables that compete in price with the iPhone seem to overlook the fact that ChatGPT is a 100% free iPhone app. White labeling doesn't stop being white labeling because it's a pin on your chest. You can't compete with a free app that exists on the device people already have and love unless you increase the convenience factor by an order of magnitude.
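To underline how thin these wrappers really are, here's a minimal sketch of roughly everything a hypothetical "AI notebook" product does, assuming the OpenAI chat completions HTTP API as it existed in 2023. The system prompt, app behavior, and API key are all invented for illustration; the request is built but deliberately never sent.

```python
import json
import urllib.request

def build_wrapper_request(user_text: str, api_key: str) -> urllib.request.Request:
    """Build the single API call a hypothetical wrapper app makes.
    The canned system prompt is essentially the whole product."""
    payload = {
        "model": "gpt-4",
        "messages": [
            # the wrapper's only real "IP": a fixed system prompt
            {"role": "system",
             "content": "You are a helpful note-taking assistant."},
            {"role": "user", "content": user_text},
        ],
    }
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

# Construct (but don't send) the request a wrapper would fire off.
req = build_wrapper_request("Summarize my meeting notes", "sk-placeholder")
```

That's the whole moat: a prompt, an HTTP call, and a UI. Which is why charging more than ChatGPT Plus for it is such a hard sell.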

All this to say, I think these products will have a market someday. I couldn't even begin to guess when that day will be, though. It could be as soon as next year, or as far off as 20 years from now. All I can say for sure is that it's not now.

Spider-Man 2 is fine

I did it again; it’s been a few months since my last post. I promise there’s a reason for this, and his photo is at the end of this post. Also, this post will be spoiling Spider-Man 2 on PS5, so do not read this unless you’re into that.

Anyways, I have been playing Spider-Man 2 on my PS5, and I just finished the main story, so I figured it's time to share my thoughts. I think this is a fantastic tech demo for the PS5, but to me it lacked the narrative cohesion and dedication to the character of the previous entries in the series. I also found myself enjoying it quite a bit less than the previous Spider-Man games, mostly due to changes in the combat that de-emphasize Spidey's acrobatics in favor of insane lightning / robot powers. This is not to suggest that the combat in the previous games was necessarily complex, but the original PS4 game, in particular, focused on immersing you in the experience of being Spider-Man while fighting enemies. This sequel, to its credit, does try to change things up. You have Doc Ock's arms as Peter, later the symbiote, and you use them in combat to stun people with electricity, throw them in the air, and dash forward. Unfortunately, this just made me feel like I was playing a generic spectacle fighter rather than a Spider-Man game. Peter used gadgets in the original, but here he feels more like Iron Man, which clashes with the image of a guy we were introduced to with pizza boxes on the floor of his apartment. This change in his combat style also makes him play a lot closer to how Miles does, which made the combat feel stale even though you switch between the two of them. About 80% of the way through the game, there's a 15-minute section where you get to play as Venom, and it's the absolute highlight of the game because it's the first time I felt like I was playing something new and different. Tearing through a building with this tank-like character was a bunch of fun, and it kinda made me wish we got a Venom game instead.

The story is also pretty average. The first game had some decent writing; Insomniac was obviously trying to prove their love and respect for the character and wrote a fun, unique story that balanced fan expectations. As someone who hasn't read a ton of comics, it was cool to see an older portrayal of Peter and a more human side to the antagonists associated with him. Mister Negative and Doc Ock were great, well-characterized, interesting antagonists for Peter. Miles Morales's solo story had its moments but was overall pretty flat and uninteresting because they didn't have much to draw from in terms of antagonists, but it was short enough, and the gameplay was fun, so I powered through it. Spider-Man 2 decided to make the main antagonist Kraven, who just sends a bunch of Bass Pro Shop wannabes running through Manhattan in their Humvees with their annoying dogs. You fight these guys for most of the game until Venom suddenly enters the plot 12 hours in, shows up, and kills Kraven. Kraven felt like a placeholder character who exists to explain why Spider-Man has to fight an army. His entire motivation is that he has a deadly disease and wants a powerful enemy to kill him. This isn't compelling or interesting because it has no relation to what either protagonist is doing. Peter is trying to save Harry, a guy we're introduced to for about 30 minutes as part of a stealth tutorial, and Miles is trying to write a college essay and has some conflict about feeling useless or something, which comes out of absolutely nowhere. He also suddenly wants to kill the antagonist from the first game (he had his own game where this wasn't mentioned once), until he doesn't. It's not compelling to me. To the game's credit, the story improved significantly after Venom entered the narrative, but by that point there were only two or three hours left. Anyways, it's time to take this post off the rails a little.

One thing I often find really frustrating about video game reviews is that games still aren't treated as forms of artistic expression. When someone reviews a film like "Killers of the Flower Moon," they judge its acting, visuals, writing, and how it affected them. The creative decisions of the work itself are what come into play. Anytime the technical aspects of a film are exceptionally good, it's more of a bonus than something that actually makes the film exceptional. Conversely, when a film's technical aspects are bad, it becomes a laughing stock. Being a competently made film is simply expected. Meanwhile, in games, we constantly praise bug-free games that look pretty, run well, have smooth anti-aliasing, big worlds, pretty reflections, etc. Games are judged as products rather than pieces of art or expression. Does the game have enough content to justify the price? How saturated is the world with things to do? Very little focus is put on why this game even needs to exist, or what it accomplishes. This is not to suggest that a game with almost no content doesn't deserve criticism, but judging the financial value of games as products is often the focus of major critical outlets. This is not how films work. If it were, James Cameron's "Avatar" would be up there with "The Godfather Part II," but I doubt you can even remember the name of more than one character from it. This is frustrating to me, especially because games have incredible potential as tools of human expression. They reverse the mental model of a person watching a film or reading a book, making the player part of the text rather than an observer looking from the outside in. This is incredibly powerful, and it's frustrating whenever a game seems to be trying its hardest to simply emulate other mediums.

Spider-Man 2 is an incredibly well-made video game. It has a big, beautiful world, it is a technical achievement for its platform, and it proves the competence of its developers. Ultimately, though, I do not think that matters anymore. Games as an artistic medium, especially AAA games, have reached a level of maturity where this should be expected rather than praised, and critical analysis should focus more on a game's status as a piece of expression and how the player interacts with it. Does Spider-Man 2 have a justified reason to exist other than a financial one? Not really. It's sort of the same thing again, sometimes better, and sometimes far worse. Maybe it doesn't need one in order to achieve its goals, and perhaps the reason critical analysis of games still treats them like products is because that's the way consumers treat them. I think big-budget cinematic games are valid and have their place in the industry. Two of Sony's efforts in building blockbuster single-player, narrative-focused games, God of War (2018) and The Last of Us Part II, are among my favorite games ever made. Maybe I shouldn't be comparing a second Spider-Man game to those, but when it's all Sony has released this year, I feel like it deserves that spotlight / scrutiny. I thought the game was just OK, and I'm sure I'll buy Spider-Man 3 anyways when it releases, but it's frustrating to see these games drift even further from what made me enjoy the first.

Here’s my cat

Why I Game On My PC Less (I'm Old Now)

I’ve been working on a much larger post for the past few months that’s an in-depth analysis of a game I have been playing, but I had some thoughts on another subject meanwhile that I figured I’d jot down in between builds.

Recently, I have fallen off PC gaming. I started my life playing games on computers, as I didn't have access to a gaming console like my friends did, and it became a core part of how I interfaced with the medium until I was almost a teenager. For me, the cost-benefit analysis of PC gaming just started to make less sense as I entered a post-college stage of my life where I had less time to fiddle with and tweak things on my computer. First, I prefer my Mac as my dominant computing interface, which means I need a separate gaming computer if I want to play basically anything. Second, I prefer playing most single-player games on my 4K OLED TV, sitting on my couch with a controller in hand. Third, I have less tolerance for things that just don't work when it comes to my entertainment. When I am finished at the end of the workday, I want to click a button and have the thing I intend to enjoy start with no issues. Any friction here is intolerable for me! I have very little time between work, socializing, going to the gym, cleaning my apartment, shopping for and preparing food, and the days I am traveling to and from the office. I have many thoughts about the subject, as usual, so I figured I'd jot them down below.

Something I have internalized about technology is that you spend either your time or your money, and every product you can buy exists somewhere on that spectrum. With PC gaming, you spend a lot more money and a bit more time to ensure your games work and play properly, but the advantage is that they run "better" (nicer visuals, higher frame rates, better resolutions) with more configuration options than anywhere else. When I was younger, I had a lot of time and very little money. I asked my parents to let me build a gaming PC and hoarded Steam gift cards to buy games during sales. That leisure time was my currency, and in exchange for fixing the multitude of issues present, I could use it to get what was a fundamentally better playing experience than what was available on a console. Obviously, that calculus has changed for me, but so has the technology surrounding it.

The image quality gap between consoles and PCs is a lot smaller than it used to be, but the financial gap remains the same. This is due to upscaling technology, which allows less powerful hardware to enlarge the displayed image by essentially guessing the missing pixels, and it does so with incredible accuracy. This allows my PS5 to upscale Final Fantasy 16 to a near-4K image, and will likely allow the successor to the Nintendo Switch to do the same when docked to a TV. In that mode the game renders at only 30 frames per second on a good day, but most games this generation let me choose to prioritize frame rate instead, which was a major reason I preferred a PC most of the time. My go-to shooter of the year, Call of Duty: Modern Warfare II (not to be confused with Call of Duty: Modern Warfare 2), even lets me play at 120 fps on my television! This eliminated a major reason I preferred gaming on PC.
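The "guessing the missing pixels" part can be illustrated with the most naive upscaler possible: nearest-neighbor, which just repeats each pixel. Real techniques like FSR, DLSS, or checkerboard rendering layer interpolation, motion vectors, and (in DLSS's case) a trained model on top of this basic idea, so treat this as a toy sketch of the concept only.

```python
def upscale_nearest(image, factor):
    """Naively upscale a 2D grayscale image by an integer factor:
    each low-res pixel becomes a factor x factor block of copies.
    Console upscalers start from this idea, then reconstruct real
    detail using temporal data and learned models."""
    return [
        [image[y // factor][x // factor]
         for x in range(len(image[0]) * factor)]
        for y in range(len(image) * factor)
    ]

# A 2x2 checker pattern upscaled 2x: each pixel now covers a 2x2 block.
low_res = [[0, 255],
           [255, 0]]
high_res = upscale_nearest(low_res, 2)
```

The interesting part of modern upscalers is everything this sketch leaves out: instead of repeating pixels, they predict what the missing ones *should* have been, which is why a 1440p internal render can pass for 4K.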

Steam has also been a factor in this decision. Steam was a revolutionary platform in gaming. It was the first to deliver full AAA games over the internet, and it gave gaming PCs the ecosystem and platform benefits of their console counterparts, such as achievements, friends lists, and more. It completely reshaped the industry, to the point where both major consoles now have options without a disc drive and no longer allow physical-only games to be sold. Over the past decade, though, the platform has languished relative to its console counterparts. Steam, while a good platform in many respects, seems to go off on tangents it never really fulfills. Steam Machines, the Steam Controller, Steam Link / streaming, and SteamVR all languished once they were met with anything less than critical acclaim and massive financial success. Their biggest foray into expanding the ecosystem, the Steam Deck, competes well with the Switch in every regard except portability and battery life, which are the parts of the Switch I value most.

These days, Steam seems to only improve the product in direct response to financial pressure from competitors, rather than pushing boundaries. They deliver the bare minimum to ensure that users don't leave the platform (which users can't do anyway, as Steam owns your games). Fun fact: I tried setting up a second account on my Windows computer for my dad the other day so he could get work done while visiting. When I opened Steam on it, it loaded my Steam account on his brand-new Windows account without any form of authentication on my end. Seems like a pretty big account security issue if you ask me! Turns out this has been broken for over a year. I guess fixing it won't help sell more Dota skins, ya know. In addition, Steam's remote play equivalent pales in comparison to my PS5's, and their cloud offering against Xbox is nonexistent. There's no way I can play my games on a phone without coughing up money to Nvidia in exchange for access to 15% of my library. Their sales are still fine, but Xbox and PlayStation have mostly caught up there too. Every time I open these consoles I see sales of upwards of 80% off, and it's been cheap for me to build up my library of digital (and older physical!) games for these systems.

Cost and accessibility are another major factor. Three years ago this month, I moved to Seattle, a city I have loved since I was a teenager and a place I am proud to call my home. My parents live across the country on the East Coast in various states, and I am obligated to visit them often, so many of my vacations involve spending time with them. I enjoy playing games regardless, especially on vacation, and with my newfound satisfaction with the PS5 I realized I could just buy a digital console for both places for less than the price of a single new gaming PC. This alone shook me; I had never thought about how cheap these consoles are relative to a PC. Unlike PC hardware, they're sold at a loss, as software and services revenue generates most of the profit. It's hard to justify buying or building a gaming PC when you can buy a PS5 for a third of the price, and it will just run games better on a 4K TV unless you spend thousands of dollars. The "cheaper" calculus that drove my teenage move to PC gaming is no longer relevant here, either.

This is not to say I never play games on a PC! I love my PC, and I enjoy playing certain games there. I play multiplayer co-op games with some friends from college, get really frustrated at League of Legends because I refuse to absorb how that game functions, languish in high-silver Overwatch, and of course play Minecraft there. However, PC used to be my default in a way it no longer is. I would buy major releases on PC, I would play new games on PC first, and my console existed only for exclusive games. Ironically, real console exclusivity seems to be a thing of the past outside of Nintendo. Xbox releases games simultaneously on PC, even on the Steam store, and PlayStation usually brings its games to PC within two years of their console release. It's never been a better time to play PC games! Ultimately, though, I realized it's just not for me, and that's OK! The thing I am trying to get across with this post, though, is that I don't think it's for the overwhelming majority of people either. If you want to play a PC-only game like Valorant, just get a mid-range Windows laptop (and never a "gaming laptop") and buy a console instead. I realized I don't need a gaming PC, and I don't think you really do either. They're not really good deals anymore, nor a "master race" (maybe think about the connotations of that term before using it, you weird-ass redditors), but just another way to play. I can buy three PS5s for the price of this year's second most expensive Nvidia GPU, and it's time the industry changed its calculus too.

Back to Web 1

I am fully convinced that the total breakdown of centralized social networking websites at every level is going to push us back to an online social architecture similar to the one that existed in the late 90's / early-to-mid 2000's. The failure of Twitter, Reddit, (looking further back) Facebook, and, probably within the next few years, Discord will all accelerate movement off centralized networking platforms into smaller, more intimate ones. Reddit's complete and total failure to monetize its users in a meaningful way turned into a disaster when its CEO threw a temper tantrum online for the world to see because other developers made a product worth paying for off his platform. This in turn has made thousands of subreddits go private, ensuring they can't be discovered by anyone outside their communities. I would be shocked if this sort of major user revolt didn't happen to another major platform before the end of the year.

My theory: Web 2 was the birthplace of the centralized network and the beginning of the internet as a place synonymous with modern life. People moved from Myspace to Facebook, from forums to subreddits, from blogs to Twitter. I think we're going to see movement backwards: forums become smaller, somewhat interconnected spaces; Meta takes over what Twitter used to have; and Reddit as a whole either undoes its recent changes or gets destroyed by the most anal people on the internet, whom it has rightfully managed to piss off. This anger is going to push many communities back to a sort of middle ground between Web 1 and Web 2, where there are both large networks and small platforms.

My take: This is bad, at least for quite a while. Instead of Twitter and 8chan, you now have dozens of unregulated smaller spaces without fact checking, misinformation reporting, centralized moderation, or CSAM scanning. This is going to make the internet worse for the vast majority of people who do not want to put in the work to deal with these things. The view that this migration from centralized platforms to decentralized ones is a net benefit for all is rooted in a crypto-libertarianism that stems from free-market capitalism. That isn't how it works: any community strong enough to sustain itself (and with new protocols that seed content across platforms, that can be any community) has the ability to self-regulate and exist with a small congregation in perpetuity. This will lead to a much larger conspiracy / misinformation problem than we ever had on Web 2 media, especially since the automated, ML-powered discovery engines that spread those conspiracies in the first place still exist on these smaller networks. Makeshift agreements such as Mastodon's covenant are great until the smaller networks forsaken for ignoring it continue to federate amongst themselves unchecked and spread whatever the hell they want.

Reddit has made me realize that what happened to Twitter will happen to your favorite website / platform, not because of some shadowy billionaire who wants to get back with his wife, but because the pressures of modern capitalism are finally demanding it. The SVB closure has prompted companies dependent on fundraising to batten down the hatches and search for the quarters buried in their couch. It's going to happen to the place you love next, and whatever replacement you scramble to just will not be as nice. Get ready.

Observations (sleep deprived ramblings) on the first wave of LLM based products

For the past few months I have spent a lot of time using the first wave of LLM products from OpenAI, Microsoft (disclaimer: my employer, whose views mine here do not represent), and Google. My motivation for this is twofold. First, these are cool as hell. Ever since Alexa and the Google Assistant, the dream of the Star Trek computer has seemed in reach, and it's obvious that LLMs are the next step on the path towards querying machines in natural human language. Second, I have ADHD, so the more digestible information is, and the closer in time it arrives to my desire to learn it, the easier it is for me to absorb. The rest I don't have to justify as much; I just wanted to jot down some observations I've had and see what feedback everyone has.

Additionally: I am writing this on an 11 inch iPad Pro, on airplane wifi, with 3.5 hours of sleep, a grande size Americano from the Starbucks in Newark Terminal B (the shittiest airport terminal in existence now that they renovated Terminal A), and a dream. So, might repeat myself. Might forget how to write a sentence. Who knows.

The closest UX to the OS wins

In the past 8 months I have used (in no particular order) OpenAI Playground, ChatGPT 3.5 Turbo, Google Bard, Bing Chat on the web, Bing Chat on iPhone, generative AI on Google Search, Raycast AI, Short Circuit, Petey for iPhone / Apple Watch, the ChatGPT plugin for Raycast, Discord Clyde, Notion AI, ChatGPT for iPhone, etc. The one that was the stickiest and that I see myself using on a regular basis is… Raycast AI. Might seem weird; after all, every app besides the ones with Google in the name is actually OpenAI under the hood. Some are even free, and Raycast is making me pay 10 bucks a month, so what's the deal here? Well, I noticed a few trends in my habits. First, I found myself rarely using these products if they involved more effort than a Google search. Having to open a browser, then a website, then type a query was too much effort to get an unreliable answer, especially when a normal search often surfaced what I was looking for. With Raycast, I don't even have to deal with that first step! I just hit cmd + tab and type, then tab, and the query is on my screen in a chat window. The convenience is worth the tradeoff of being restricted to GPT 3.5 Turbo with no search functionality 99% of the time; the other 1% of the time, I prefer to search anyways. This made me realize that the race for the dominant model + system will depend on who can get theirs bundled with the iPhone first.

Windows Copilot was revealed at Microsoft Build this week (disclaimer part 2: so was the product I work on, Microsoft Fabric, check it out, it's very cool) and it's obvious that this sort of OS-level integration is the next frontier for these models after they've been integrated into search engines. Android will likely get some form of integration with a LaMDA / PaLM based model sometime in the next year or two, same with Chrome / ChromeOS, so the question is what Apple is going to do to keep up. Siri has lagged behind its competition for years, and I think a future in which Android has the equivalent of Windows Copilot while Siri can't even answer basic questions on my HomePod is a genuine risk for them. If I had to place a bet, the obvious / safe one is that Apple is going to wait until they can feasibly do this on device, either by training their own LLM (unlikely) or partnering with / utilizing an open source project as the foundation for it.

Nothing is free (except some things are)

The other thing I noticed is that every single one of the above products not from Google, OpenAI, or Microsoft costs money. This is because OpenAI's API costs money. Every query you send to Raycast AI, they need to pay for, and since they can't predict how many queries you'll send, they need to charge subscription revenue up front. Some products like Short Circuit and Petey, both of which were ChatGPT wrappers I used before the official iPhone app was released last week, let you just give them your OpenAI API key so you can pay for what you actually use. This type of payment was the only way I could use several of these products at the same time, and it made me realize that the cost of API requests is a problem for centralized LLM platforms. A Google employee's fascinating memo titled We Have No Moat reframed the entire way I thought about this new competitive space. My takeaway from the memo was that open source, on-device models are going to catch up pretty soon, and if we're truly going to plug this tech into many different products, I expect future implementations to rely on local / FOSS models a lot more as a way to get around subscription bloat. You can't compete with free, unless you're Adobe, in which case, yeah, GIMP sort of sucks and I'm willing to sell my soul to never use it. The point is, people aren't willing to pay 5 bucks a month for you to meet them where they are for every product on the planet. It's either gotta be free or included in the price of the product.
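To make the economics concrete, here's a rough back-of-the-envelope sketch of why a wrapper app has to charge subscription revenue up front. The per-token price and the usage numbers below are illustrative assumptions for the sake of the example, not OpenAI's actual rate card:

```python
# Illustrative sketch: why per-query API costs force wrapper apps into
# subscriptions. PRICE_PER_1K_TOKENS is an assumed figure for the sake
# of the example, not a real published rate.
PRICE_PER_1K_TOKENS = 0.002  # USD, hypothetical GPT-3.5-class pricing

def monthly_cost(queries_per_day: int, avg_tokens_per_query: int, days: int = 30) -> float:
    """Estimate what a single user costs the app per month."""
    total_tokens = queries_per_day * avg_tokens_per_query * days
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

light_user = monthly_cost(5, 500)     # a few short questions a day
heavy_user = monthly_cost(200, 1500)  # someone who lives in the app

print(f"light user: ${light_user:.2f}/month")  # pennies
print(f"heavy user: ${heavy_user:.2f}/month")  # blows past a flat $10 subscription
```

The spread between those two numbers is the problem: a flat subscription overcharges the light user and loses money on the heavy one, which is why the bring-your-own-API-key model maps so much more cleanly onto actual usage.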

They sort of suck!

I don't think there's any getting around how much relying on these things as your primary source of information feels like shit. Not because of any ethical concerns, but because the way the information is presented is so sterile and generic that I start to go a little crazy after reading too much of it. It's the mental equivalent of only drinking Soylent forever. After enough time, your body starts to revolt and ask for real food. I also started to feel like I was going mental after a while; while they don't hallucinate often, they do it enough that you really can't trust them for anything serious. I use them for grabbing bash commands I forgot, a task I used to use Stack Overflow / Google for. The nice thing about the latter is you could usually use intuition to tell when something you saw was wrong, based on the age of the information, the number of upvotes, etc. Here it's a total crapshoot, and it makes you feel like a paranoid nutcase when something breaks. If you are asking an LLM for information on a subject you are unfamiliar with, you are flying blind. Products with search like ChatGPT Plus don't even fix this problem, as they can hallucinate just as easily.

My crystal ball predictions

Alright, so it's time to make my predictions. They'll be a simple bulleted list so I do not have to remember how the English language works anymore, because the caffeine is wearing off and, I'll be honest, it's starting to slip from me.

Predictions:

Okay that’s enough thinking for the day.

Bird Down

Disclaimer: I work for Microsoft on Azure. Nothing I state here reflects any views my employer holds. These opinions are mine and mine alone, unless they’re also yours, in which case, hell yeah. Additionally, I am not an author. Grammar is going to be all over the place. Thanks.

Over the past few months my favorite place on the internet has been dying. You know why. An insurmountable number of horrific decisions were made by its new owner, who essentially appointed himself the primary class of user to be catered to. He decided to crusade against bots on the site, an occurrence less frequent and annoying to me than the spam I receive in my email inbox. He allowed everyone to purchase a blue verification badge for 8 dollars without actually verifying their real identity (the previous point of the badge), devaluing the entire concept of the badge and ensuring that tweets now convey far less information than they ever used to. Last week he ultimately removed the badges from everyone who was "legacy verified" before giving them all his new blue service for free, a thing he promised he would never do. This has turned the badge into a mark of shame, something all the previously verified users such as Stephen King are assuring their followers they did not pay for of their own free will, and has resulted in Twitter no longer being a place that makes news and moves markets.

Imagine being so absolutely horrific at marketing and branding that not only does no one want your product, you have managed to make every fucking influencer on the planet actively state they would never pay for it in the first place and do everything in their power to remove it. If that's not enough, the new users he has "verified" for paying 8 bucks a month generally produce the worst drivel I've seen in my almost 14 years using this website, and you now have to wade through them at the top of all replies like you're traversing marshland in order to see anything worth actually reading. They constantly obsess over views and metrics, complaining when their paid-for algorithmic boosting doesn't compensate enough for their sheer inability to post. And those are just the direct results of his product decisions! This doesn't take into account the horrific security concerns that have arisen since he found himself in charge, including giving journalists behind-the-scenes "god mode" screenshots and accidentally allowing private "Circle" tweets to show up in algorithmic feeds for all users, a bafflingly massive security violation.

Outside of Twitter, his leadership of Tesla in recent years has been a mixed bag at best, unable to deliver product categories like the pickup truck in a timely manner due to absolutely baffling design decisions, resulting in Tesla being eclipsed by Rivian / Ford in terms of both quality and sales. His leadership of SpaceX isn't even worth commenting on, because Shotwell effectively runs that shop. Even if you see his stewardship of those corporations favorably, by comparison his time at Twitter has been nothing short of amateur. You don't even have to take it from me: the venture capitalists and conservatives who supported him initially are dunking on him in the replies after he banned them for zero good reason.

The only reason the site still draws a pulse is that there hasn't been a suitable alternative for an exodus yet. Mastodon is a cool proof of concept but ironically is far too centralized around the concept of the instance, with every aspect of your experience dictated by the administrator of said instance. It's Twitter if it was hosted in the panopticon. Bluesky is a far more promising alternative in terms of functionality (quote posts! recommendations!) but has a long way to go in terms of UX polish, isn't open to the public yet, currently lacks the ability to block someone, and it's up in the air whether the main instance will even suspend the accounts of Nazis. If I had to put my money down on a winner amidst all this, I think Meta/Facebook's P92 is the closest bet. They're hovering over the Twitter bird's dying carcass like a vulture waiting to feast. They have the motivation, too: their failed short term metaverse pivot fundamentally requires them to own text communication on the internet and appease investors in the short term with growth outside of that space. To compete against Twitter, their product will integrate with Instagram, likely letting celebrities bring over their followings and verification; to compete against newcomers like Mastodon, it'll be decentralized and support ActivityPub. The latter concession alone is enough to show me that they understand how to win in this new void Elon has built.

Four years ago, Twitter was one of the most influential websites in the world. The 45th president of the United States was an absolute addict. It drove almost every news cycle. After years of trying, Facebook couldn't compete or keep up, outside of the real-time celebrity tabloid that is Instagram. Now, suddenly, out of nowhere, hundreds of people internally are working to build P92, a product which did not exist a year ago. It's because Elon is a weak leader with zero understanding of consumer business strategy, incapable of building products for user needs that don't align with his own, and everyone with half a brain in Silicon Valley understands that there is money to be made off his narcissistic ignorance. His leadership has effectively caused a 9% drop in daily active users of his platform year over year, caused a total advertiser meltdown, and caused famous celebrities to announce they are not paying for his subscription service, which he has now turned into a mark of shame. He's so scared of competition that he banned even talking about competitors; then, scared he was going to lose his users, he reversed course on that policy within days. He's also continuing to lose advertisers, and most importantly he is losing his sense of product direction, ruining the entire appeal of the network to suit his whims. Twitter will "survive" much in the same way every dead network never stops working; like Tumblr, LiveJournal, and MySpace before it, it'll die a slow death, fading out of relevance until its user acquisition stream ultimately goes bust. It will never stop working entirely, but it will stop being useful. To me, it's already almost there. It's less a question of whether it's happening, and more about who will replace it. On that topic, we'll just have to wait and see which vulture swoops down first.

The iPad Pro and the Studio Display

Last year I decided to splurge on a bunch of technology that I figured would improve my work life. As someone who predominantly works from my home office, I am of the belief that an expensive purchase that greatly impacts my enjoyment of how I spend many of my waking hours is worth it. I purchased two Studio Displays from Apple, and while I have my issues with them, they do exactly what I wanted them to do and I adore the sheer fidelity / color accuracy of the screens. I use them with my 16 inch MacBook Pro, with one plugged into each Thunderbolt 4 port and the laptop itself closed.

Another device I purchased last year is a new 11 inch iPad Pro with an M2 SOC. Originally I was let down by the announcement of the M2 model, as my dominant use case for the iPad (outside of being an expensive Chromebook / travel computer) is watching videos, so I was holding out for it to get the mini-LED display of its larger counterpart. However, my old iPad Pro was the original "all screen" 11 inch from 2018, and it was starting to show its age in battery life. Additionally, I had been waiting for years to get a version with cellular connectivity, so I purchased it regardless. The one thing I wanted to try with this, though, was Stage Manager.

Stage Manager is on the 2018 iPad Pro as well, but I never liked it much there. It felt laggy and slow, and on the small 11 inch screen it's often more of a pain to get what you want out of it than the traditional Split View multitasking. The new iPad Pro added a few additional features, though, such as the Thunderbolt port and multi-monitor support, which I felt would be perfect for the device. I figured that an iPad Pro, combined with my existing Mac setup, would be a fantastic combination, especially when I am visiting my parents. So I've been attempting to use the iPad with my Studio Display for casual use, and there are a few problems. Figured I'd post them here so they can be emailed around Apple HQ and have a few tickets opened. I know this is the most first world of first world rants, but this is the stuff I love to talk / think about, so please excuse me, but here goes.

1. The iPad does not support external cameras, at all

If you plug an iPad into a Studio Display, you are stuck using the iPad Pro's integrated camera for all purposes. This includes FaceTime, Zoom, Teams, and any other video conferencing app you can think of. There is no way to use any camera besides the one on the physical hardware of the device. This is wild because when you plug the iPad into the Studio Display, it automatically switches to the display's speakers and microphone! This is a baffling decision, a huge oversight, and something I hope is fixed in the next version of iPadOS or a future software update for the Studio Display. Oh right, the display gets updates; we'll talk about that in a bit.

2. The iPad is always the primary monitor

In modern computer operating systems there is the concept of the "primary" monitor, which is where your taskbar lives and is usually your main point of focus when interacting with your computer. On iPadOS, the primary monitor is always the display of the iPad. It is where your apps live, your widgets, etc. That monitor is required to be on, and if you have the (fantastic, product-changing) Magic Keyboard for the device, you are unable to operate it in any fashion when it is closed and plugged into an external display. This is silly; with a laptop open and plugged into a larger monitor, I almost never default to the laptop as the "primary" display, and looking back from a 27 inch screen to an 11 inch one to open an app is just unintuitive over long periods. It gets even sillier when you try to either open Control Center or view your notifications on an external display. With the former, if you click Control Center on the display, it opens on the iPad screen instead.

The latter doesn't even work on the display. It won't let you do it. You need to move your mouse over to the iPad screen, then move it to the top, and then keep moving it upward as if there's more display above it. Right, speaking of which, you can't unlock the device from the Studio Display either. The password, lock screen, etc. are all permanently on the iPad display. I could keep going, but I'll be honest, I haven't discovered all the problems yet.

3. Stage Manager is still limited

This is a shorter one than many might expect, because I don't actually dislike Stage Manager. I think it's a pretty solid compromise, and I applaud Apple for building a window manager with a touch interface as a primary input mechanism that doesn't feel terrible to use at all times. I had no idea if it could be done. My issue with it comes less from its functionality and more from its interface. So here's a list of nitpicks inside a list of nitpicks. When you open a new app, by default it closes the old one. You need to drag an app / window on top of an old window stack in order to get two windows, and there's no way to change that default. Four windows is the maximum you're allowed in a single Stage Manager "stack" or whatever it's called, which makes total sense for an 11 inch screen but is hilariously sparse for a 27 inch one. When you drag a new window onto a full stack, it automatically jettisons the first one you put there, FIFO style, which means you have to continuously cycle to get the stack you want, or have the foresight to move one of your other windows to the iPad or just close it.

This would be the least busy desktop UX I've ever seen, but this is the most Stage Manager lets you do.
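That four-window cap with oldest-first eviction behaves exactly like a bounded queue. A toy model in Python (the app names are just placeholders):

```python
from collections import deque

# Toy model of a Stage Manager stack as described above: at most four
# windows, and adding a fifth silently evicts the oldest one (FIFO).
MAX_WINDOWS = 4
stack = deque(maxlen=MAX_WINDOWS)

for app in ["Safari", "Notes", "Mail", "Music", "Messages"]:
    stack.append(app)  # the fifth append pushes the oldest window out

print(list(stack))  # "Safari", the first window placed, is gone
```

Which is exactly the cycling annoyance: to keep a particular window around, you have to keep re-adding it before something newer evicts it.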

The thing about Stage Manager is that there's a ton of potential there. I see it. It's so obvious. If they just fix a few problems with it, they'll have built an excellent way for power users to handle multitasking.

4. You can’t update the display

The Studio Display is a weird gizmo because it has a CPU inside. Seriously! The thing comes bundled with a SOC, specifically an A13, which I assume is used for handling features such as the webcam's Center Stage feature, and probably some form of audio processing for the speakers / microphone. Updates happen sparingly, but in the past 8 months of ownership I have received 3. The problem is that those updates happen through macOS. iPadOS, as far as I can tell, has no mechanism to deliver updates to the display. This is a shame, because although the Studio Display is an expensive device, it's also one that will last decades, and I was hoping I could grab one for my mother (who doesn't need a full computer) so she'd have a big display / keyboard to do light work on, as well as a place for me to dock my MacBook when I visit for the week and need to get some work done.


I wrote this entire article using the two products together, and it's obvious that the potential for this to be a fantastic experience is there, but since Apple moved the Mac line to Apple Silicon, it's something I care about a lot less. My iPad has since moved from an essential part of my computing lineup to second fiddle to my Mac, with only a few features I wish my laptop had (such as an integrated cellular modem). My reMarkable 2 has replaced it as far as reading articles / writing goes, and my Mac has caught up to it in battery life and portability. This is a shame, because I adore the iPad Pro line and want it to fit somewhere in my life, as well as for it to become the only computer my parents utilize, due to its simplicity and security. Hopefully they can fix the above issues, because otherwise this is a really cool experience that I would recommend to those looking for a light / secure computing surface.

The End of Computing

Disclaimer: These thoughts are my own and not representative of my employer. These ramblings would have gone to a discord server i’m in but instead all of you get to see them. Lucky you! I didn’t edit or proofread this so expect it to sound like a long winded rant.

There's this infamous concept called the end of history. It was proposed by Francis Fukuyama, and the main idea is that we have reached a point in human history at which we have gone through all the major changes. Change would still happen, of course, but at a smaller rate. There would be no more walls falling, no further world wars. Regardless of how I personally feel about this concept in its original application (I am not a fan), I keep thinking about how it applies to the tech industry. I can't stop myself from thinking that we've hit a similar point with computing. The revolutions of the personal computer, the internet, and the modern smartphone have had me wondering for years what the next evolution of technology would be. Smart glasses? Smartwatches? Virtual reality? The metaverse? Self-driving? Crypto? AI? Every year this focus changes and changes again. All trends which materialized into nothing.

I am decently optimistic about the possibility of VR / LLMs, but I used to be optimistic about smart glasses and crypto many years ago, so I probably shouldn't be trusted on this. Even now I find myself more pessimistic about the technologies I used to be optimistic towards, as I find their implementations lacking / incomplete. Regardless, I think we've reached the end of computing. The big shifts already happened with the Mac / Windows and iOS / Android, and even if every one of the aforementioned trends actually does materialize into the next computing platform, they aren't going to transform society any more than the personal computer, the internet, or the smartphone did.

Even though I consider myself optimistic about LLMs such as OpenAI's GPT, I see their potential implementations less as instruments of creation and more as instruments of refinement. I have been fascinated with how the tech industry got self-driving so very wrong, with its ever-expanding problem space. While that never materialized into anything actually useful, it became a lodestar which led to the creation of new crash detection technologies. Assistance, not autonomy. I see LLMs as destined for a similar path. They can write emails, but those emails are boring and lack personality. Instead, I expect them to exist as a copilot to enhance and critique your writing rather than simply replace it. Imagine a word processor with a built-in copilot to your right, telling you that you need to improve your conclusion, or that you're rambling about LLMs in the middle of a blog post that was supposed to be about phones or whatever.

I think that while there are exciting new technologies and new possibilities, even if LLMs materialize into the AGI that Twitter grifters are foaming at the mouth to predict the timetable of, they won't matter. We've already got a device in our pocket that connects us with everyone and can access the entirety of human knowledge. The best case outcome of this technology is a mathematical model that reads and summarizes search results for you. It's transformative, but not world changing. Same with smartwatches; it's just a phone on your wrist. Virtual reality, for as much as I adore it, has a massive fight ahead of it as long as its biggest friction is that people who own these headsets don't even want to put them on. The metaverse as a concept already exists and is Discord, and anyone trying to build 3D virtual spaces is missing that 2D ones are better. Crypto? I can go on for hours about why that won't work.

A quote I keep thinking about is something Nilay Patel said on The Vergecast once, paraphrasing here: "The iPhone was such a successful product that it reshaped society around it." He's right. The iPhone and Android reshaped society. The internet reshaped society. Personal computers reshaped society. LLMs write essays and recombine information. I'm impressed, but not floored yet. I think it could finally be the foundational technology people have been waiting years to build a new generation of products on… but I've been wrong before.

Discuss this with me on Twitter or Mastodon

The Tech That Made My Year

I have been obsessed with tech products my whole life, and I can't imagine a world without them. They are the first thing I think of when I wake up, and usually what I am thinking about when I fall asleep. My day job is my passion, and I'm so grateful that this is something I get to spend the rest of my life thinking about and living in. That being said, I buy and consume a lot of tech stuff throughout the year, so I wanted to make a list of my favorite things I was introduced to in 2022. From hardware, to software, to operating systems, to services, these are the products that made my year.


The Apple Studio Display

It's sort of become a joke amongst my friends that I'm pretty obsessed with screens. I don't know how this started. Probably during the pandemic, when I blindly bought a TV and wondered why there was a white glow behind black text. I then looked at my phone, and back at the cheap gaming monitors on my desk, and wondered why the color white on those monitors was either too pink or too blue. I also hated how pixelated everything looked on them compared to my phone. Not for video editing or anything, but for reading text! As a software engineer, all you do all day is read text, and it might be placebo, but blurry text had a horrible effect on my eyes. I had to strain more often, and at the end of the day all I wanted to do was close my eyes and lie down. The Studio Display is pretty damn expensive compared to other monitors, even similarly specced ones. It lacks features (it's limited to 60hz, with a single dimming zone / no HDR) that make many monitors better on a technical level. But despite this, it remains gorgeous. The color on the 5K screen always looks correct, regardless of whether you use a Mac with True Tone or a Windows PC. The Thunderbolt ports have yet to fail. The speakers sound phenomenal for a display. I can't comment much on the webcam, as I use an Opal C1 instead, but half the time FaceTime uses the integrated one instead of my Opal, and it takes longer than you'd think for me to realize that; it's perfectly adequate now for video calls. As someone who predominantly works from home, this was the best quality of life upgrade I have ever bought, so much so that I bought a second one.

BeReal

I never much loved photo based social networks. I wasn't popular enough in high school for Snapchat to be fun, and I became hot long after the Instagram grid became something people cared about. All I really want is a way to keep up with what my friends are doing, and BeReal is the first social network that actually lets me keep up with the people I care about on a regular basis.

1Password 8

1Password was always known for being a product that put the Apple ecosystem first. It had apps for other platforms, of course, but the apps on the iPhone / Mac were always created with the utmost care. When they announced their switch to an Electron UI on macOS, many people were rightfully upset. They stated this would allow them to move faster and build a better product, which I was skeptical of. Turns out it was true. I've told people for years that if they want to spend money on their digital security, 1Password is the best thing to spend it on, and that is especially true this year. The addition of secret key support has once again shot them years ahead of their competition, including the native password managers bundled with the operating system. I still prefer to use enclave keys for SSH into sensitive systems, but it has once again become an indispensable product for me.

Arc

Arc has become so essential to my daily life that I genuinely forgot about it when writing this list. It's not something I think about as a product anymore. At first I was a skeptic, but it has since become my daily driver for both my work and personal web needs. I think it's the right paradigm to move towards in web computing: combining the concept of favorites with tabs, forcing ephemerality on opened tabs, and making web apps a prominent part of the UX. The team behind it is friendly and responsive, and I have gotten to the point where I greatly look forward to their patch notes each week. I can't wait until it arrives on Windows so that I can recommend it to all of my friends, and especially until it arrives on iOS / iPadOS so it can be my complete daily driver.

The Steam Deck

I have had quite a few differing opinions on this device over the past 8 months of owning one. Let's start with the bad stuff. It's giant, the size of three Nintendo Switches. In its carrying case it takes up an overwhelming amount of space in the front pocket of a backpack, and it only meets the definition of handheld in so much as you can hold it in your hands. It is certainly not easy to transport; I find it far easier to slide my laptop into my backpack to bring to a coffee shop than to pack my Deck in its case to bring on a plane. Deck Verified games are not "fuss free" like they are on a Switch. Not only do you need to fiddle with the performance settings to get a battery estimate above 2 hours on almost all new games, but the verification status is sometimes a lie, as it doesn't take into account updates which break the Proton compatibility layer. I downloaded Wolfenstein II, a Deck Verified game, to play on a flight, and couldn't get past the second level due to a GPU bug which meant I couldn't see unless I was facing in one specific direction. The screen is also pretty poor given the price. Many will complain about the 720p max resolution, but that's actually the only way games on it are playable, and I don't really mind. What I do mind is that the Switch OLED, a device that is 50 dollars cheaper than the lowest end Deck model, comes with a dock and wireless controllers and has a gorgeous OLED screen, while my Deck sits with its outdated LCD. This often makes the decision of where to buy games a lot harder than it should be, especially between the Deck being a hardware beast and the Switch being a glorified Android tablet that was out of date when it launched almost six years ago. However, and this is a big however: from a hardware perspective, this is one of the most impressive and well designed products I've ever seen in my life. It's giant, but it never feels giant when you hold it. The buttons are in the perfect place.
The touchpads work incredibly well. The fact that it can run Elden Ring in my hands is nothing short of a miracle. I booted up Final Fantasy XIV, a game I will never actually get fully into, on both my PS5 and my Deck, and after fiddling with the controls a bit, I preferred how it played on my Deck. I love that the Deck is an open handheld I can do anything on, including play the best Sonic game ever made. For years I have said that the Switch is a "Kindle for video games", but I think the Steam Deck, with its massive backlog and library, fits that bill even more. I played Doom 2016 perfectly on a plane. I played Deus Ex on vacation. If the new MW2 ran on this thing, it would probably be the only handheld gaming device I ever used. This is not a device to get for someone who is frustrated by tinkering with software, or for a person who is unfamiliar with Linux, but for those who are comfortable with both, this is one of the greatest things you can buy.

The MacBook Pro (16 Inch, 2021)

This is, without a doubt, the greatest computer I have ever owned in my entire life. I've owned and used a great many computers, and absolutely none have consistently smashed my expectations as much as this machine has. It has redefined what a laptop is and can be. It has more CPU power than the desktop I use in my office at work, and more power than the gaming PC I use at home. It has a better screen than anything else on the market that isn't OLED. It has fantastic speakers. It has a great keyboard. It has a trackpad that puts every other laptop to shame. It has a battery life that absolutely smashes every other computer on the market. All of that alone would be incredible, but on top of it, I've never heard the fan run once. I've owned many MacBooks over my life, having been an Apple customer for close to 20 years, but this is the first time I splurged on one of the more expensive models, and even then it is worth every single penny. It's put my iPad to shame. It's put every other computer I've used to shame. It's hard to recommend anything else.

Favorite Shows of 2022

I watch a lot of TV, mostly in the background while doing other things. Usually I’ll make a list of my favorite shows of the year and send it to my friends, but this year I have this blog, so I figured why not post it? Anyway, the rule is that to count as a “current year” show, the first episode of that season / batch had to air in that year, so if episode 1 of a show premieres on December 31st 2022, it counts. I don’t count direct-to-streaming / TV films here, so if you want my opinions on those, I’ve got a Letterboxd like every other zillennial struggling with their fading cultural relevance.

The List

1.) The Bear

Season 1, Hulu

The Bear came out of nowhere. My friend recommended I check out this show many times, but I didn’t listen until she sent me a tweet that compared it to Uncut Gems. It’s probably the most distinctive writing voice I heard on television in 2022. It’s pretty par for the course to write a show about found family and belonging, but The Bear nails the mess of that in a way I haven’t seen before. Every actor is perfect. The music is perfect. The directing is perfect. Really, there’s nothing else I can say about this one.

2.) Andor

Season 1, Disney+

As someone who genuinely does not care about this franchise in the slightest anymore, I can’t believe that a Star Wars show is this high on my list this year. The Mandalorian Season 1 was fun TV reminiscent of a classic western, but this is on an entirely different level. Andor is a phenomenally written piece that focuses directly on the fascism that has always been present in the franchise but was mostly used as fodder for the protagonists to overcome. It’s satisfying enough just to see something this overtly anti-fascist get produced, but the writing and characterization are also some of the best on TV in years.

3.) Severance

Season 1, Apple TV+

Brilliant television. One of the most brilliantly written and best directed dramas in years. This feels like it was algorithmically cooked up in a vat of every science fiction show from the 2010s that I fell in love with. It has the ambiance and visual design of Counterpart, the mind-bending writing of Mr. Robot / Homecoming, and the mystery / world building of early Westworld. It’s been a long time since a show’s cliffhanger has hit me this hard. I can’t wait for season 2.

4.) Barry

Season 3, HBO

This show has wonderfully evolved from its primarily comedic origins into one of the best dramas still on television, while retaining its phenomenal wit. It’s probably the first real spiritual successor I’ve seen to the crime drama of Breaking Bad, which, unlike Barry, quickly dropped its dark comedic roots in its second season. It also shares Bojack Horseman’s sharp critique of Hollywood.

5.) Better Call Saul

Season 6, AMC

This show is better than Breaking Bad. I prefer Saul as a main character to Walter, I prefer the focus on the legal side of crime, and I love that I get to spend more time exploring my favorite characters from Breaking Bad. This season ends things wonderfully, and I highly recommend the series to everyone who enjoyed Breaking Bad.

6.) The White Lotus

Season 2, HBO

While the satire of the rich isn’t as sharp as in the first season, it’s still the funniest show on TV. I’d argue it’s even funnier than the first, because Jennifer Coolidge gets the time she needs to shine here and delivers some of my favorite comedic performances of the year.

7.) Cyberpunk: Edgerunners

Miniseries, Netflix

I’m obviously a sucker for science fiction, specifically well-done cyberpunk settings. One of my favorite films is Blade Runner 2049 because of this. I was interested in the Cyberpunk game but didn’t pick it up on account of it looking like trash. This show, on the other hand, is so excellent that it made me buy the game on Steam at half price (still not worth it imo) just to experience the world and these characters more.

8.) Harley Quinn

Season 3, HBO Max

Comedies in the modern age of TV have changed. Modern dramas are almost 10-hour films, so the transition to linear narratives fit them like a glove. Sitcom stories were episodic though, and that format benefitted them pretty well because they could just focus entirely on situational humor without dealing with character development. Upon watching it for the first time last year, it’s pretty apparent that the show which changed everything was The Good Place, a sitcom so incredibly brilliant it left reverberations across the whole industry. Anyways, Harley Quinn focuses more on plot and character development than ever before in this season, and while that means the rapid nonstop pace of incredible jokes from seasons 1 and 2 slows down, it ultimately benefits the longevity of the show and works a lot better than it did for What We Do in the Shadows.

9.) Peacemaker

Season 1, HBO Max

The first breakout hit of the year. I genuinely didn’t expect Peacemaker to be as good as it was, but James Gunn just knows how to frame these stories. They’re funny and not to be taken super seriously, but unlike the Whedon style of writing which dominated Marvel for the past decade, they actually focus on the characters and their personal stories. Gunn’s films (such as the excellent Guardians of the Galaxy Vol. 2) usually focus on relationships, parents, siblings, and found family. Cena is the star of the show, and this would not work in the slightest if it weren’t for his absolute charisma as an actor here. I’d love to see him try a more dramatic role in the future, because there were some hints here that he might be up to the task.

10.) Smiling Friends

Season 1, Adult Swim

the renaissance men are coming to town

11.) Righteous Gemstones

Season 2, HBO

Not as tightly written / clever as the first season but still a great and funny show. Eric Andre was born for this role and I adored him in it.

12.) Wednesday

Season 1, Netflix

A fun show about a magical boarding school that isn’t written by a piece-of-shit TERF.

13.) What We Do in the Shadows

Season 4, FX

This show has tapered off from its incredible first two seasons and moved towards something that’s slightly more narrative driven. I understand why it made that choice, but the first two seasons were the funniest show on television, and it has recently been relegated to only one of the funniest.

14.) His Dark Materials

Season 3, HBO / BBC

Great finale! A pretty well done adaptation of an excellent series. I can’t believe this was a book they tried to market to kids, and I wish I had read it when I was younger.

15.) Game of Thrones: House of the Dragon

Season 1, HBO

I honestly went into this one expecting nothing amazing, and until the last few episodes it really wasn’t. It had great acting and of course incredible production values, but was generally a snooze. When it finally reached the conflict it had been building towards, though, it piqued my interest. A solid watch if you enjoyed Game of Thrones in the past.

16.) Stranger Things

Season 4, Netflix

Season 1 of Stranger Things was a lightning-in-a-bottle masterpiece of a season of television. Stranger Things Season 4 isn’t. It’s still worth the watch if you’ve got a Netflix subscription though, especially before they kick us all off password sharing.

17.) Star Trek: Strange New Worlds

Season 1, Paramount+

Not the best Trek show I’ve seen, but it’s a real Trek show, unlike everything else with the name on Paramount+. It understands the core of what Trek should be: an at least semi-optimistic show focused on exploration, philosophy, and the betterment of humanity. My main issue is that it is not unique or original in its setting or characters. I’d vastly prefer a new ship with a new crew over nostalgia-baiting the Enterprise again, but if they had to do this concept, I think they’ve pulled it off as well as you can. If you’re a fan of the pre-Kurtzman shows, you’ll like this at least a bit more than the other stuff he’s put out.

18.) The Boys

Season 3, Amazon Prime Video

I don’t really love this show as much as everyone else seems to. It’s good! But something about it has always irked me. I don’t love the acting (with the notable exception of Starr as Homelander), I don’t like how the main guy makes the Tucker Carlson face every time a superhero has a weird fetish or their genitals explode, and more importantly I don’t actually care about any of “The Boys”, who are supposedly the focus of the show. They never grow, they never develop, and every season they sit in a basement somewhere and yell at each other while the actual plot happens around them, usually independently of their Discord-server-like spats. I found myself skipping those sections entirely or scrolling on my phone waiting for them to end. The show’s excellent political views are never explored beyond the absolute bare minimum that’d allow it to be marketed as politically minded without being sued for false advertising, and without actually saying anything with more depth than a half-length tweet. Regardless, it’s still worth watching because it’s fine and you can probably watch it for free anyways.

Penalty Box