Can AI help me plan my honeymoon?

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

I’m getting married later this summer and am feverishly planning a honeymoon with my fiancé. At times it has been overwhelming to research and choose among what seem like millions of options while juggling busy work schedules and wedding planning.

Thankfully, my colleague Rhiannon Williams has just published a piece about how to use AI to plan your vacation. You can read her story here. The timing could not be better! I decided to put her tips to the test and use AI to plan my honeymoon itinerary.

I asked ChatGPT to suggest a travel plan over three weeks in Japan and the Philippines, our dream destinations. I told the chatbot that in Tokyo I wanted to see art and design and eat good food, and in the Philippines I wanted to go somewhere laid-back and outdoorsy that is not very touristy. I also asked ChatGPT to be specific in its suggestions for hotels and activities to book. 

The results were pretty good, and they aligned with the research I had already done. I was delighted to see the AI propose we visit Siargao Island in the Philippines, which is known for its surfing. We were planning on going there anyway, but I hadn’t had a chance to do much research on what there is to do. ChatGPT came up with some divine-looking day trips involving a stingless-jellyfish sanctuary, cave pools, and other adventures. 

The AI produced a decent first draft of the trip itinerary. I reckon it saved me a lot of time researching destinations I didn’t know much about, such as Siargao. 

But … when I asked about places I did know more about, such as Tokyo, I wasn’t that impressed. ChatGPT suggested I visit Shibuya Crossing and eat at a sushi restaurant, which, like, c’mon, are some of the most obvious things for tourists to do there. However, I am willing to consider that the problem might have been me and my prompting, because I found that the more specific I made my prompts, the better the results were. 

But here’s the thing. Language models work by predicting the next likely word in a sentence. These AI systems have no understanding of what it is like to actually experience these things, or how long they take. ChatGPT, for example, suggested spending one whole day taking photos at a single scenic spot, which would get boring pretty quickly. It also suggested accommodations that were way out of our price range. Today’s AI systems lack the kind of last-mile reasoning and planning skills that would help me with logistics and budgeting. 
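
If you’re curious what “predicting the next likely word” actually looks like under the hood, here is a toy sketch using the small, openly available GPT-2 model via Hugging Face’s transformers library (my own illustration with a made-up prompt, not anything ChatGPT-specific). The model simply scores every possible next token and surfaces the most likely ones; nothing in that process knows what a day of sightseeing actually feels like.

```python
# Toy illustration of next-word prediction with an open model (GPT-2).
# A simplified sketch, not how ChatGPT itself is built or served.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Hypothetical prompt, purely for illustration.
prompt = "On our honeymoon in Tokyo, the first thing we should see is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Logits have shape (batch, sequence length, vocabulary size):
    # a score for every possible token at every position.
    logits = model(**inputs).logits

# The model's guesses for the very next token, ranked by likelihood.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top.indices, top.values):
    print(f"{tokenizer.decode([token_id.item()])!r}  (score: {score:.1f})")
```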

But this whole process might become much smoother as we build the next generation of AI agents. 

Agents are AI algorithms and models that can complete complex tasks in the real world. The idea is that one day they could execute a vast range of tasks, much like a human assistant. Agents are the new hot thing in AI, and I just published an explainer looking at what they are and how they work. You can read it here.

In the future, an AI agent could not only suggest things to do and places to stay on my honeymoon; it would also go a step further than ChatGPT and book flights for me. It would remember my preferences and budget for hotels and only propose accommodation that matched my criteria. It might also remember what I liked to do on past trips, and suggest very specific things to do tailored to those tastes. It might even request bookings for restaurants on my behalf.

Unfortunately for my honeymoon, today’s AI systems lack the kind of reasoning, planning, and memory needed. It’s still early days for these systems, and there are a lot of unsolved research questions. But who knows—maybe for our 10th anniversary trip? 


Now read the rest of The Algorithm

Deeper Learning

A way to let robots learn by listening will make them more useful

Most AI-powered robots today use cameras to understand their surroundings and learn new tasks, but it’s becoming easier to train robots with sound too, helping them adapt to tasks and environments where visibility is limited. 

Sound on: Researchers at Stanford University tested how much more successful a robot can be if it’s capable of “listening.” They chose four tasks: flipping a bagel in a pan, erasing a whiteboard, putting two Velcro strips together, and pouring dice out of a cup. In each task, sound provided clues that cameras and tactile sensors struggle to pick up, like whether the eraser is properly contacting the whiteboard or whether the cup contains dice. Using vision alone in the last test, the robot correctly judged whether there were dice in the cup just 27% of the time; with sound included, that rose to 94%. Read more from James O’Donnell.

Bits and Bytes

AI lie detectors are better than humans at spotting lies
Researchers at the University of Würzburg in Germany found that an AI system was significantly better at spotting fabricated statements than humans. People usually get it right only around half the time, but the AI could tell whether a statement was true or false in 67% of cases. However, lie detection is a controversial and unreliable technology, and it’s debatable whether we should even be using it in the first place. (MIT Technology Review)

A hacker stole secrets from OpenAI 
A hacker managed to access OpenAI’s internal messaging systems and steal information about its AI technology. The company believes the hacker was a private individual, but the incident raised fears among OpenAI employees that China could steal the company’s technology too. (The New York Times)

AI has vastly increased Google’s emissions over the past five years
Google said its greenhouse-gas emissions totaled 14.3 million metric tons of carbon dioxide equivalent in 2023, 48% higher than in 2019. The increase is mostly due to Google’s enormous push toward AI, which will likely make it harder for the company to hit its goal of eliminating carbon emissions by 2030. It’s an utterly depressing example of how our societies prioritize profit over the climate emergency we are in. (Bloomberg)

Why a $14 billion startup is hiring PhDs to train AI systems from their living rooms
An interesting read about the shift happening in AI and data work. Scale AI has previously hired low-paid data workers in countries such as India and the Philippines to annotate the data used to train AI. But the massive boom in language models has prompted Scale to hire highly skilled contractors in the US with the necessary expertise to help train those models. It highlights just how important data work really is to AI. (The Information)

A new “ethical” AI music generator can’t write a halfway decent song
Copyright is one of the thorniest problems facing AI today. Just last week I wrote about how AI companies are being forced to cough up for high-quality training data to build powerful AI. This story about an “ethical” AI music generator, which used only a limited data set of licensed music, illustrates why that matters: without high-quality data, it can’t generate anything even close to decent. (Wired)
