I keep a list of ideas for things that might be useful. I know that I'm never going to do much with many of these, so I might as well publish them.
Ultimately, however, ideas aren’t the limiting factor. Doing things is the limiting factor.
I call this the Open Ideas List. (Publish your ideas on your website!)
Not all of these ideas are necessarily feasible, but that's not the point. Read Idea Sex before proceeding.
Please let me know if you're interested in any of these ideas: 1) I might have more ideas for you; 2) I'm interested!
-Chris. 23 August 2020
Ideas are sorted subjectively best to worst.
Instagram, Twitter, and Facebook can decide what psychologically manipulative tactics they want to use against us. And because we can't control the view (GUI/client) and notifications for these platforms, we have no defenses against this. Let's change that.
Something I've heard from people trying to quit or limit social media use is "But I still want to stay in touch with [an actually legitimate or healthy use of social media]".
Sometimes the solution is simply to make another account and only follow a few core people. But maybe we can do better.
Have some kind of app that lets you view Facebook/Instagram/Twitter/Discord/YouTube/whatever, and removes all of the addictive features. Twitter won't show numbers. Facebook won't show recommendations. You have control over notifications.
There could also be an option to only allow social media access during a certain schedule.
An option to have the app shut down after X minutes or Y scrolls. (And it doesn’t even need to be a hard shut down—even shutting down for 10 minutes would probably be incredibly helpful.)
What would be helpful is some kind of very precise way to get notifications. Designate a few chats or groups that are actually important, and that way you never have to check the social media platform, and you also don’t have to have general notifications on.
Could sell this as some kind of subscription. Maybe the subscription brings more customizability, with a free version allowing limited use of the social platforms.
On the technical side, it really might not need to be more than a browser with some precise HTML blocking (access the social media apps on the web), plus some APIs to handle the customized notification system.
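To make that concrete, here's a tiny sketch of the HTML-blocking half in Python with BeautifulSoup. The CSS selectors are made-up placeholders; the real ones would have to be reverse-engineered per platform and maintained as the sites change:

```python
# A minimal sketch of "precise HTML blocking". The selectors below are
# hypothetical placeholders, not the platforms' real markup.
from bs4 import BeautifulSoup

BLOCKED_SELECTORS = [
    "[data-testid='like']",           # hypothetical: like/retweet counts
    "[aria-label*='Notifications']",  # hypothetical: notification badges
    "aside",                          # hypothetical: recommendation sidebars
]

def strip_addictive_features(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for selector in BLOCKED_SELECTORS:
        for element in soup.select(selector):
            element.decompose()  # delete the element from the page entirely
    return str(soup)
```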
Hm… I’m not sure whether this is genuinely helpful though. Is the best solution overall “no social media” and this is just an enabler? Hm.
AI already exists that can grab depth information from handheld smartphone footage. You can use this to scan real life spaces into a virtual reality environment.
Here's one such example ML model that converts video footage into a depth map: Consistent Video Depth Estimation. (Video.) Of course this isn't perfect nor does it do everything required, but this alone is a large chunk of the solution for converting real spaces into VR spaces.
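For flavor, here's roughly what off-the-shelf per-frame depth estimation looks like, sketched with MiDaS via torch.hub. (MiDaS is a single-image depth model, not the Consistent Video Depth Estimation model linked above, and the input filename is hypothetical.)

```python
# Per-frame depth from phone video, sketched with the MiDaS model.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
midas.eval()

video = cv2.VideoCapture("walkthrough.mp4")  # hypothetical phone footage
depth_maps = []
while True:
    ok, frame = video.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        depth = midas(transform(rgb))  # inverse relative depth per pixel
    depth_maps.append(depth.squeeze().numpy())

# depth_maps plus camera poses (e.g. from COLMAP) would then feed the
# actual point-cloud / mesh reconstruction of the scanned space.
```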
I suppose this is an inevitable invention for mainstream VR. But I can't tell whether or not this exists already. I think I found some tangentially related patents, but I've lost the link.
(This idea to convert real spaces into VR spaces with AI was inspired by an idea I wrote further down on this page: "Google Street View, except for touring houses and apartments" [and have since consolidated into this idea]. This cross-pollination is exactly why I write down all of my ideas no matter what.)
Ideally the processing would be so robust that 'panoramic video' footage could be taken from anyone with a smartphone, even if the camerawork is shaky.
Hm would this be useful for walkthroughs of large buildings, or anywhere other than renting/selling houses and apartments?
Well, frankly, this already exists in the form of less-practical 'photospheres', which are already pretty functional. I think the opportunity cost of working on this would probably be negative relative to anything else that someone capable of working on this problem could work on instead.
I notice that a lot of my peers don't know how to cook. This is a possible solution.
App where you start off with a few basic recipes. Then you rate your dishes. Maybe you can even enter in the app what was wrong with the dish and it could suggest improvements.
Over time you "unlock" new recipes. I think this is a nice solution to the otherwise overwhelming nature of cooking and recipes: there are so many recipes and no filtering. What is needed is simplicity, and the cutting down of options. A path.
Could break cooking down into different skills and do the advancement that way. (See the sketch at the end of this idea.)
And then turn it into a social network where whenever someone 'levels up' it gets shared with their friends.
Maybe too it would become the dedicated platform for asking for help with cooking.
Would have well-designed recipes and/or cooking videos.
Could also combine with a food-delivery company and then you can select what meals you want to cook some week and the ingredients are delivered to your door, or automatically imported into Prime Now, or it just makes you a grocery list.
Revenue from… a premium subscription plan? Mere partnerships with grocery delivery companies? Access to a tele-chef assistant? Ads / no-ad subscriptions?
What would also be cool is if the cooking videos were interactive. Where they don't proceed until you say "OK, next". You can also say "go back", "What was that?", "How much butter?". (And, it would actually work well, rather than being fumbly.)
And then during in-between steps it tells you other things you can get ready that the recipe will need later. Allows for non-linear cooking.
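A sketch of the unlock mechanic mentioned earlier: recipes gate on prerequisite recipes, and rating a dish well enough unlocks whatever depends on it. All recipe names and the rating threshold here are made up:

```python
# Toy version of the recipe-unlock progression.
UNLOCK_RATING = 4  # out of 5; arbitrary threshold

prerequisites = {
    "scrambled eggs": [],
    "omelette": ["scrambled eggs"],
    "frittata": ["omelette"],
}
ratings: dict[str, int] = {}  # dish -> best self-rating so far

def unlocked() -> list[str]:
    return [dish for dish, prereqs in prerequisites.items()
            if all(ratings.get(p, 0) >= UNLOCK_RATING for p in prereqs)]

ratings["scrambled eggs"] = 5   # the user rates their dish
print(unlocked())               # ['scrambled eggs', 'omelette']
```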
Pick a topic on Wikipedia. Put it on a graph. Go through all of the links in the main text on the topic's Wiki article. Connect those to the first node on the graph. Do this another few times.
Now you have a network of everything that's related to a specific concept.
I'm not sure what this would be useful for (learning what to learn in a topic?), but it seems like it could be really interesting.
Or, imagine a system that breaks down all of the prerequisite knowledge to understand a complex topic. You could mark every related concept which you understand, and it could tell you exactly what Wiki articles you need to review before understanding the topic at hand.
Do the above for every article on Wikipedia. See if there are interesting connections or data gleaned.
Maybe this could be used to objectively judge how "distant" two concepts are.
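A minimal sketch of the crawl itself (the starting topic and the per-page link cap are arbitrary, and the link filtering is crude):

```python
import networkx as nx
import requests
from bs4 import BeautifulSoup

def article_links(title: str, limit: int = 10) -> list[str]:
    """Titles linked from the main text of one Wikipedia article."""
    html = requests.get(f"https://en.wikipedia.org/wiki/{title}").text
    content = BeautifulSoup(html, "html.parser").find(id="mw-content-text")
    links = []
    for a in content.select("a[href^='/wiki/']"):
        target = a["href"].removeprefix("/wiki/").split("#")[0]
        if ":" not in target:  # skip File:, Category:, etc.
            links.append(target)
    return links[:limit]

graph = nx.DiGraph()
frontier = ["Hippocampus"]  # arbitrary starting topic
for _ in range(2):          # expand two hops out from the start
    next_frontier = []
    for title in frontier:
        for linked in article_links(title):
            graph.add_edge(title, linked)
            next_frontier.append(linked)
    frontier = list(dict.fromkeys(next_frontier))  # dedupe before next hop

print(graph.number_of_nodes(), "nodes,", graph.number_of_edges(), "edges")
```

On a graph like this, nx.shortest_path_length between two articles would be one crude measure of that conceptual distance.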
What websites does X website link to? Show it on a graph!
I found this thing that visualizes subreddits specifically. Demo 1. 2. 3. GitHub.
Here are some other things that I've found that might be related. I haven't looked into these much yet:
Connecting every bit of knowledge: The structure of Wikipedia's First Link Network —— (PDF)
Research:Wikipedia Knowledge Graph with DeepDive
Graph Website Links With Neo4j
Link Structure Graphs for Representing and Analyzing Web Sites
The idea is making this into an interactive website.
Something the variants of COVID have got me thinking: what if we could engineer a weaker form of a virus? Maybe there are optimizations to 1) increase the memory of the immune system; 2) increase its contagiousness (to outcompete the current virus).
If almost everyone will be infected, it’s better to have control over what they will be infected with.
Yeah this could backfire.
But maybe it would simply be easier to develop mRNA vaccines quicker.
Still, I wonder if there’s any way like this to create ‘viral immunity’…a ‘viral vaccine’…
Also, is it possible to have one variant of a virus preclude another from replicating? It seems that the 'UK Variant' (as it's being called in mid-December 2020) outcompeted the previous variant in just a few weeks. How could this happen? If someone is infected with both forms, does one spread to others more easily than the other?
AI that can take in webcam footage and do better compression because it knows what faces look like in general and knows what to expect. Maybe it even becomes accustomed to your face and can make specific optimizations.
Thus it improves video quality while limiting bandwidth. Can use the same for anything, really. I guess we use ML algorithms/AI to replace manually-derived signal theory when there are patterns that we just can't see. You could transmit someone's voice or face with far less data (I imagine) if you already have a model of the expected possibilities of how they might look or what they might say. AI-powered compression algorithms.
The AI would learn what faces or voices in general look or sound like to be able to do better compression, and/or it would learn the faces and voices of individual people. Over time it might become better at transmitting your face as it figures out what you look like.
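As a toy illustration of the person-specific half (not a real codec; the architecture and sizes are arbitrary): overfit an autoencoder to one person's webcam frames, then transmit only the small latent code.

```python
# Toy "learned codec": the sender runs the encoder, transmits the latent
# code, and the receiver runs the decoder. Sizes are purely illustrative.
import torch
import torch.nn as nn

LATENT = 128  # transmit 128 numbers per frame instead of 64*64*3 = 12288

class FaceCodec(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, LATENT),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, frames):
        code = self.encoder(frames)   # this is all that gets transmitted
        return self.decoder(code)     # the receiver reconstructs the frame

codec = FaceCodec()
optimizer = torch.optim.Adam(codec.parameters())
frames = torch.rand(32, 3, 64, 64)    # stand-in for real webcam frames
for _ in range(100):                  # "become accustomed to your face"
    recon = codec(frames).view_as(frames)
    loss = nn.functional.mse_loss(recon, frames)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```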
Oh, maybe this exists somewhat? At least for general data, not faces. The field of AI compression (or ML for compression in general) definitely hasn't been fully developed yet.
This probably would be of great interest to all video, actually. YouTube is mostly people’s faces...
I've recently started university, and I'm quickly realizing just how inefficient education is. Much of what is said isn't necessary, and much of the time spent isn't benefiting most of the people in the room. But of course the only way around this is to have completely individualized teaching. This is impractical now, but perhaps it won't be for long.
Perhaps an AI that asks a student questions, and from the student’s answer, the AI can gauge what the student doesn’t understand and teach them (or at least recommend the right resources or example problems). Very tight feedback loops are the goal.
The AI would also be able to say how well the student has learned a topic, and what they need to improve on within the topic.
I'm not sure whether this would require capable artificial general intelligence; if so, I'm not sure it's worth building or thinking about yet.
But perhaps this doesn’t require AGI. The only trouble is that before an AI teaches a student calculus, it itself needs to ‘learn’ calculus, and how do you do that…
Tiny little stickers you put slightly over the lens of your camera. They would take up a little bit of space on the camera's view to show a website URL, line of text, emoticon, or something like that. Ex: something fun to put up during Zoom calls.
Not sure how to get things commercially printed on the (10-micrometer?) level required, though.
You could also do this with software, but something seems different about doing it that way.
Update: actually, I don't think this would work; the decal would likely be totally out of focus.
Some sort of noninvasive brain monitoring that can tell whether or not someone likes a piece of music, and automatically adjust the music based on that. (Whether it's actually controlling an AI generating music, or merely picking other music)
Company that does everything for individuals that want a great work-from-home computer setup.
Would sell packages. Would make everything as simple as possible.
Maybe come and set it up for you.
Sell to individuals, or sell to companies for employees?
Definitely an expensive-but-we-do-everything type of product.
Could build customers professional Zoom setups. Get good lighting, get a good camera. Make everything just work.
Hm, do people shake and speak more while having a nightmare?
If so, detect shaking and wake up the user with buzzing or sounds.
Might be best suited as a watch app on an existing platform, but I know very little about this.
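The detection half might be as simple as thresholding the variance of wrist-accelerometer readings. A sketch, assuming some watch API delivers magnitude samples at ~50 Hz (the threshold is a guess that would need tuning on real sleep data):

```python
# Rolling-variance shake detector over a 5-second window.
import statistics
from collections import deque

WINDOW = 50 * 5         # 5 seconds of samples at an assumed 50 Hz
SHAKE_THRESHOLD = 0.15  # variance in g^2; made up, needs real-data tuning

window = deque(maxlen=WINDOW)

def on_accelerometer_sample(magnitude: float) -> None:
    """Called by the (hypothetical) watch API for each new sample."""
    window.append(magnitude)
    if len(window) == WINDOW and statistics.variance(window) > SHAKE_THRESHOLD:
        trigger_wakeup()

def trigger_wakeup() -> None:
    print("shaking detected: buzz / play a sound")  # stand-in for haptics
```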
I am looking forward to lab-grown meat, but I don't understand how muscle meat can be grown without the muscles ever actually having been flexed or used. Maybe ultrasonic treatment of the muscle could do it ¯\_(ツ)_/¯.
Maybe something like this, and then use it for approximating Navier-Stokes or something like that. At least that's what the article made me think.
Such that the professor/TA is not included in the stream that’s produced.
Note that the algorithm for this already exists! (I just can’t find any information on how much compute it requires… I doubt it could work in real time right now.)
It could work on the side of the professor’s technology, or as a Zoom bot (if those exist).
(Google Scholar for Targeted Memory Reactivation)
By cueing memories and associations during sleep (ex: the location of an object is paired with a sound, and the sound is played while you're in deep sleep), recall can be significantly increased. (What %?)
Here's a video about Targeted Memory Reactivation.
I use an open source program called Anki to remember associations, processes, facts, et cetera, via flashcards. Generally, every time you successfully recall something, Anki will next test you on it at an ever-increasing interval (spaced repetition).
Say you want to remember "the hippocampus writes memories to long term storage". First you’d make an Anki flashcard, "Hippocampus // writes…", and associate that with a hippopotamus sound.
What about an app that hooks into/works like Anki, and at night when you're asleep, it plays the associated sounds of new flashcards?...
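A sketch of what the night-time half could look like. One part is solid: Anki's collection really is a SQLite file with a revlog table of reviews. The sleep-stage check and the card-to-sound pairing, though, are stand-ins I made up:

```python
import sqlite3
import time

def cards_studied_today(collection_path: str = "collection.anki2") -> list[int]:
    """Card ids reviewed since (UTC) midnight, read from Anki's revlog table."""
    midnight_ms = (int(time.time()) // 86400) * 86400 * 1000
    db = sqlite3.connect(collection_path)
    rows = db.execute("SELECT DISTINCT cid FROM revlog WHERE id > ?",
                      (midnight_ms,)).fetchall()
    return [row[0] for row in rows]

def is_in_deep_sleep() -> bool:
    raise NotImplementedError  # an EEG headband / watch API would go here

def play_cue(card_id: int) -> None:
    print(f"playing the sound paired with card {card_id}")  # the hippo sound

def run_overnight() -> None:
    cues = cards_studied_today()
    while cues:
        if is_in_deep_sleep():
            play_cue(cues.pop(0))
        time.sleep(60)  # re-check the sleep stage every minute
```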
But, well, SRS is already extremely time-efficient… I suppose, however, that this could allow someone to have millions of cards in Anki (which would otherwise require ~thousands of reviews a day). But at that point I don't think it would be harmless to sleep; TMR is hijacking something the brain is already doing.
With an EEG or other sensor, is it possible to measure the brain's "response" to an attempt at TMR? This way it could be known whether or not the user actually remembers the thing being cued. It could be observed when someone forgets something...
Just, how can it be made more practical?
An idea my dad had:
An app like Waze: it recognizes what town or state I am in, and narrates a brief history of the area as I pass through. I think a lot of people who take long drives passing through many towns would enjoy listening to the history of their current location.
Would probably be easiest to narrate a section of the area's Wikipedia article. The only tricky part would be figuring out what to read on the page (ex: in proportion to how long you're there).
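A sketch of the pipeline, using two Wikipedia APIs that do exist (geosearch, and the page-summary endpoint) plus the pyttsx3 text-to-speech library. The 'in proportion to how long you're there' part is left out:

```python
# GPS coordinates -> nearest Wikipedia article -> summary -> speech.
import requests
import pyttsx3

def nearest_article(lat: float, lon: float) -> str:
    params = {
        "action": "query", "list": "geosearch", "format": "json",
        "gscoord": f"{lat}|{lon}", "gsradius": 10000, "gslimit": 1,
    }
    result = requests.get("https://en.wikipedia.org/w/api.php",
                          params=params).json()
    return result["query"]["geosearch"][0]["title"]

def summary(title: str) -> str:
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    return requests.get(url).json()["extract"]

def narrate(lat: float, lon: float) -> None:
    engine = pyttsx3.init()
    engine.say(summary(nearest_article(lat, lon)))
    engine.runAndWait()

narrate(42.3601, -71.0589)  # example: driving through Boston
```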
Find a window bugscreen and look far away through it. Keep your head still. How well can you see? Not very.
Now move your head constantly, and look far away. It's a lot sharper, right?
But at any point in time, your eyes are receiving the same amount of information as when you were standing still. The extra information that causes the sharpness while moving comes from information across time. (Across the "frames" that the eye experiences, to compare the eye to a camera.)
When the viewer is moving, there is more information available than when it's stationary.
So make an AI model that can take moving video and interpolate that extra information across frames into each frame individually, making the video "superresolution".
This might already be possible using this model. (Here's a video showcasing this model.) Perhaps the model could be "told" that the obstruction in this 1080p video is a lattice between every pixel, and thus it could interpolate the pixels in between.
(More specifically: upscale, for example, a 1080p video to 4k, such that each pixel is now 4 pixels. And then cover that video with every-other-pixel alternating lines of black. This results in a video where every pixel is "spread out" from the others, and 75% of the pixels are part of the black lattice. This is just one way of doing it maybe.)
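In code, that masking step might look like this (numpy, one frame at a time):

```python
# Spread a 1080p frame onto a 4K canvas: each source pixel lands at even
# coordinates, and the remaining 75% of pixels form the black lattice that
# the superresolution model would be asked to fill in.
import numpy as np

def spread_frame(frame: np.ndarray) -> np.ndarray:
    h, w, c = frame.shape
    lattice = np.zeros((h * 2, w * 2, c), dtype=frame.dtype)
    lattice[::2, ::2] = frame  # keep every other row and column
    return lattice

frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
print(spread_frame(frame).shape)  # (2160, 3840, 3), 75% black
```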
I tried reaching out to two authors, twice, about this: no responses.
ex: https://rev.com/freelancers
This might already exist. But if transcription is currently completely manual, I imagine it would be far cheaper to first use AI to 1) transcribe the audio; 2) time-align it; 3) possibly even flag areas of uncertainty where a human should definitely step in.
Then a human transcriber comes in and fixes the transcription.
And then, of course, use the correct transcription to train the AI until less and less manual work is necessary.
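One possible version of steps 1 through 3, sketched with the vosk speech-recognition library (its word-level output includes timestamps and a per-word confidence score; the 0.8 "a human should look at this" cutoff is an arbitrary guess):

```python
import json
import wave
from vosk import KaldiRecognizer, Model

recognizer = KaldiRecognizer(Model("model"), 16000)  # pretrained model dir
recognizer.SetWords(True)  # ask for per-word timestamps and confidences

audio = wave.open("interview.wav", "rb")  # 16 kHz mono WAV, hypothetical
words = []
while True:
    data = audio.readframes(4000)
    if not data:
        break
    if recognizer.AcceptWaveform(data):
        words += json.loads(recognizer.Result()).get("result", [])
words += json.loads(recognizer.FinalResult()).get("result", [])

# Time-aligned draft transcript, with low-confidence words flagged for
# the human transcriber to check first.
for w in words:
    flag = "  <-- review" if w["conf"] < 0.8 else ""
    print(f'{w["start"]:7.2f}s  {w["word"]}{flag}')
```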
Some kind of task graph where each node is a task and tasks are connected to other tasks by arrows. Arrows indicate the sequence at which things must be done.
Time would be a constant direction (downward, rightward, whatever) in the graph.
Tasks would be able to branch into parallel task branches.
What it could look like:
(Rightward is time increasing.)
I feel like this is something we all have in our minds while working on a project, but it's never made explicit. Expressing a graph like this is really hard to do in text, and so we never think to show this thing that's in our heads.
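The underlying data structure is just a directed acyclic graph, and networkx even computes the parallel branches for free. A sketch (the tasks are made up; a real tool would add the time axis and the drawing):

```python
import networkx as nx

tasks = nx.DiGraph()  # edge A -> B means "A must be done before B"
tasks.add_edges_from([
    ("design", "build frontend"),
    ("design", "build backend"),   # parallel branch
    ("build frontend", "integrate"),
    ("build backend", "integrate"),
    ("integrate", "ship"),
])

assert nx.is_directed_acyclic_graph(tasks)  # no circular dependencies
for step, group in enumerate(nx.topological_generations(tasks)):
    print(f"step {step}: can be done in parallel -> {sorted(group)}")
```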
AI to make a 3D image using two very differently placed cameras in a scene.
This could be done from two photographs, or I suppose two live cameras too.
Update: pretty sure this exists now. (And pretty sure it just approximates from one camera perspective.)
Crowdfunding, except for applications or games. Nothing too big. For individuals wondering whether there would be public support for creating a certain product, service, art, etc.
Anyone that likes the idea puts down some money to make the project real (and maybe to get some kind of perk, but not required). The money is taken and held by the platform, but not given to the creator of the campaign until the project is finished.
The backers decide if the project is finished? Or maybe they all vote, and if 80% or 90% agree, then it's done and dispenses the funds. (80% or 90% of the funders, weighted by their contribution.)
Or this could be on an individual level, where every person decides whether they want to dispense the funds. But maybe the funds only dispense if 80% or 90% of the total backers agree that the project is done.
Or maybe part of the money could be paid to the creator. Or used to back up a loan that the creator takes on, so they're still liable for scams.
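The weighted-vote release rule is easy to state precisely. A sketch (the 80% threshold and the backers are illustrative):

```python
# Funds unlock only when backers holding at least 80% of the pledged
# money have voted that the project is done.
RELEASE_THRESHOLD = 0.8

def funds_release(pledges: dict[str, float], voted_done: set[str]) -> bool:
    total = sum(pledges.values())
    approving = sum(amount for backer, amount in pledges.items()
                    if backer in voted_done)
    return approving / total >= RELEASE_THRESHOLD

pledges = {"alice": 50.0, "bob": 30.0, "carol": 20.0}
print(funds_release(pledges, {"alice", "bob"}))    # 0.80 of funds -> True
print(funds_release(pledges, {"alice", "carol"}))  # 0.70 of funds -> False
```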
Sharing excess capacity
Rent anything out that you have, but rarely need to use.
Trailers for cars; camping equipment… uh
Hm but why wouldn't you just do this with a physical store, so no one has to own these things?
If it was a physical store, too, then delivery could be taken care of.
Also who would want to actually charge their neighbors? It can't be local but it has to be local… maybe a physical store is a better idea.
OK, back to first principles: what is an underused item that some people own and that other people have demand for?
Or it could be renting anything.
I'm not sure there's a large market for "renting random items (from random people)".
Not my idea. This is something NASA started researching (in 2004?). I haven't researched this in a while, but apparently someone, Arnav Kapur, is pursuing it now. TED Talk
I think this is really cool; perhaps it's clunky now, but maybe it could develop to be widely useful. I'm not sure why not. I'm also not sure how long it will remain useful in the interim period before capable brain-computer interfaces, though.
Maybe the reason it didn't take off 15 years ago is because AI & wearables have only recently caught up to this task.
Wikipedia: see the popular interest in a topic by the number of visits to its page.
Could go further maybe if there's data on what people search afterwards. But I don't think there's data on this, there doesn't appear to be any analytics on wikipedia.org pages.
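The "number of visits" half is already public, though: Wikimedia publishes per-article view counts through a REST API. A minimal fetch (the article and date range are just examples):

```python
import requests

def daily_views(article: str, start: str, end: str) -> list[int]:
    """Daily view counts; start/end as YYYYMMDD."""
    url = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
           f"en.wikipedia/all-access/all-agents/{article}/daily/{start}/{end}")
    response = requests.get(url, headers={"User-Agent": "open-ideas-sketch"})
    return [item["views"] for item in response.json()["items"]]

print(sum(daily_views("Hippocampus", "20200801", "20200831")))
```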
When someone buys a single track/piece/whatever from an artist on whatever website, a popup shows before they complete the order, asking them to support the artist's Patreon instead (per month, cheaper than the thing they were going to buy), and then the thing they were going to buy is sent for 'free'.
I tried to contact Patreon about this and they said they‘d take the idea and maybe they‘ll do it.
Update: This exists now!
AI that approximates what's outside the frame of a video. In any moving video there's a lot that goes out of frame but can be approximated. We humans know roughly what is outside the shot of a camera, so teach an AI to do that too.
Train the model on moving videos that have been cropped smaller.
Also make sure the model has an accurate model of what it doesn't know. Don't want it hallucinating.
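Generating the training data is the easy part: crop the center out of real frames, so the model has to predict the border it can no longer see. A sketch:

```python
import numpy as np

def make_training_pair(frame: np.ndarray, margin: int = 32):
    """Input: the cropped center. Target: the full frame. The border is
    what the model must learn to approximate."""
    h, w, _ = frame.shape
    cropped = frame[margin:h - margin, margin:w - margin]
    return cropped, frame

frame = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
x, y = make_training_pair(frame)
print(x.shape, "->", y.shape)  # (192, 192, 3) -> (256, 256, 3)
```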
There was a study a few years ago that found that turning on a midsize car takes the same amount of gas as 7 seconds of idling.
[Paper]
"These data suggest that idling accounts for over 93 MMt of CO2 and 10.6 billion gallons (40.1 billion liters) of gasoline a year, equaling 1.6% of all US emissions." (Source)
In early 2020 I tried to contact Honda and Toyota about a way to try to stop this. My idea was that educating people ("don't idle, please") probably wouldn't work, since many Americans don't seem to care, but software could. I tried to get these two companies to add a pop-up to their cars, triggered by several minutes of idling. The message would be something very roughly like "Idling releases toxic gas that you and those around you are breathing. Idling for even short periods of time wastes gas and ages your engine. Please consider turning off your car."
I wasn’t able to get through to Toyota, but I spoke to the head of sustainability at Honda and he politely said he didn’t care.
At which point I gave up.
Retro-rationalization: I suppose the real solution isn't a pop-up, just as the real solution isn't education. The real solution is electric cars. It's still a shame though.
Copy-handwriting AI. Take a photo of a handwriting sample, give it a script, and it generates the script in the handwriting of the sample.
Do this on the level of generating a font from the handwriting sample, or on the more precise level of generating an entire image?
Mail your mail to a special address that will digitize all of your mail. Maybe this already exists.
Probably useful if you live abroad. Could do this in multiple countries.
Not sure if it's legal.
Could also send any paper notebooks you have to this service and they would scan them for you.
To make sure that the enormous spoils of AI and AGI are shared, there could be a precedent set by the Supreme Court (or however it works, I'm not sure) that artificial intelligences (AIs) or artificial general intelligences (AGIs) have their own right to property. Actually, that doesn't make much sense.
Declaring that AGIs are conscious, but their property goes to everyone… hm. Well nevermind about this specific plan.
Mind map a topic, with all of the connections between concepts, and then automatically make each connection into its own card to be studied.
Maybe there's an add-on that already exists for this.
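If no add-on exists, the conversion itself is small. A sketch with the genanki library, treating the mind map as (concept, relation, concept) edges and making one card per edge (the example edges and the numeric IDs are illustrative):

```python
import genanki

edges = [  # toy mind map
    ("Hippocampus", "writes memories to", "Long-term storage"),
    ("Hippocampus", "is part of", "Limbic system"),
]

model = genanki.Model(
    1607392319, "Mind Map Edge",
    fields=[{"name": "Front"}, {"name": "Back"}],
    templates=[{"name": "Card 1", "qfmt": "{{Front}}",
                "afmt": "{{FrontSide}}<hr id='answer'>{{Back}}"}],
)
deck = genanki.Deck(2059400110, "Mind Map")
for a, relation, b in edges:
    deck.add_note(genanki.Note(model=model,
                               fields=[f"{a} // {relation}…?", b]))
genanki.Package(deck).write_to_file("mindmap.apkg")  # import into Anki
```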
Measures average external sound level during the day, and records how you change the volume during the day. Learns your preferences.
Might be patented? Google has an "Adaptive volume" feature like this in Pixel Buds (2020).
This would also free up the remote to be used for speed control.
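The learning half could start embarrassingly simple: log (ambient level, volume you chose) pairs and fit a line, then use the fit to pick a starting volume. A sketch with made-up logged data:

```python
import numpy as np

# (ambient dB, user-chosen volume 0-100): stand-in for real logged pairs
history = np.array([(40, 20), (55, 35), (70, 60), (85, 80)], dtype=float)

slope, intercept = np.polyfit(history[:, 0], history[:, 1], 1)

def suggested_volume(ambient_db: float) -> int:
    return int(np.clip(slope * ambient_db + intercept, 0, 100))

print(suggested_volume(62))  # e.g. stepping onto a busy street
```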
The quality of audio from videos at 3x speed or higher on YouTube far, far exceeds the quality of sped-up podcasts and videos everywhere(?) on iOS. I'm not sure why. (See speed-listening.)
Apparently meerkats have at least part of a language. Maybe, for fun, we could use machine learning to decode their language and speak back.
Actually gold isn't magnetic, nor silver. Maybe palladium or platinum?
But yeah I have no idea what a magnet on a ship would pick up. I spent 30 seconds Googling this and found nothing.
Maybe this is useful for inspiring something else.