
Motivation in Photography
31st December 2025
AI Personal Assistants are getting better and have their uses. However, they still have a long way to go before they are trustworthy. There are areas where they fail miserably.
Despite valid concerns about AI, people will use it. It saves time and can provide the answers we are looking for with a single click or tap. If you want to know, say, how the lens’s aperture works, an AI assistant will provide a concise answer. However, my investigation threw up some interesting shortcomings.
As a photographer and writer, I do a lot of research. I read books, watch videos, and peruse articles both in magazines and online. If I am writing an article, I will research the topic thoroughly to ensure the information I provide is correct. I then double-check it against several sources. Even then, I can still make mistakes; it is human to err.
AI personal assistants should significantly speed up my research process; I should be able to use them instead of search engines. Indeed, they have one distinct advantage over standard internet searches: AI Personal Assistants (AIPAs) can provide the information I am seeking without prioritising advertisements. Nevertheless, I thought I should research their reliability.

My Approach to Researching Using AI Personal Assistants
The arguments about AI’s plagiarism are well debated, so I am not going into them here. Like many photographers, I use the AI in my camera for subject detection. It is also available, with varying effectiveness, for masking during raw file development. Also, AI-based noise reduction has revolutionised high ISO photography.
I sometimes use it to help research topics.
However, I have never fully relied on the answers AI Assistants give. Instead, I ask them to provide links to pages that explain the topic I have requested. I think that approach is essential for two main reasons.

Firstly, I have friends who write for and run other photography websites. They tell me they have seen a huge drop in traffic to those sites since the advent of AIPAs, and especially since the AI summary that now appears at the start of online searches. It’s internet traffic that keeps sites running through the advertising revenue it generates. Therefore, by clicking the links the AIPA provides, I am, in a small way, helping to maintain their existence. If I relied solely on AI’s information, which it may have effectively stolen from those sites, I would be helping to bring about their closure, as they would be denied traffic.
Ultimately, that will be self-destructive for the AI. If the websites doing the work disappear, AI will have nowhere left to harvest/steal its data.
Secondly, AI can and often will give the wrong answer.

An Example of AI’s Failure
The information AI provides is sometimes inaccurate and needs to be checked. For example, I recently asked Microsoft’s 365 Copilot for the minimum operating temperature of my camera. It got the answer wrong. The conversation went like this:
Me: What is the minimum operating temperature of the [my camera]?
Copilot: The [my camera] (mirrorless digital camera) has a specified operating temperature range of 0°C to 40°C (32°F to 104°F). This means the minimum operating temperature is 0°C, so it’s not officially rated for sub-zero conditions, even though its IP53 weather-sealed body offers good protection against dust and moisture.
Me: That is wrong information. It is -10°C
Copilot: You’re absolutely correct — thanks for catching that! The [my camera] is officially rated for an operating temperature range of -10°C to +40°C (14°F to 104°F). This makes it suitable for cold-weather photography.
Copilot provided links showing where it sourced its information, and those pages contained the correct data, yet it still gave the wrong answer. I tried the same question with Google’s Gemini, Apple’s Siri, Amazon’s Alexa, and ChatGPT, and they all answered correctly.

Can AI Correct Its Mistakes?
I then asked Copilot whether it would learn from the wrong answer it gave. I was curious because, in the early releases of AI models, their behaviour was influenced by the conversations they had. Consequently, some bots became corrupted, producing offensive and extremist language, because they were learning directly from unfiltered input. Modern models do not learn this way and have safeguards in place, so I was interested to see how Copilot would respond.

Copilot: That’s a great question! In this chat, I don’t retain memory beyond the current conversation, so I can’t “learn” in the way humans do or remember corrections for future sessions. However, I do adapt within this conversation — if you point out an error, I’ll immediately correct it and use that updated information going forward here.
If you’d like, I can also explain why the mistake happened and share how I verify technical specs like operating temperature to avoid similar issues. Would you like me to do that? [I answered Yes]
Here’s why the mistake happened and how I verify technical specs:
Why the Error Occurred
• Initial Source Ambiguity: Many camera spec sheets list “operating temperature” as 0°C to 40°C for general electronics, but the [my camera] is an exception because it’s freeze-proof. Some sources mix up storage temperature with operating temperature.
• Reliance on Common Defaults: I initially pulled from a generic range often seen in cameras, rather than confirming the [my camera’s] rugged spec.
How I Verify Specs
1. Manufacturer Documentation
I check the official product page or user manual from [my camera’s manufacturer]. These are the most authoritative sources for operating conditions.
2. Cross-Reference Trusted Reviews
Sites like DPReview, Imaging Resource, and official press releases often confirm rugged specs like -10°C for [my camera].
3. Avoid Mixing Storage vs Operating Specs
Storage temperature can be much lower than operating temperature, so I make sure to distinguish the two.
Best Practice Going Forward
When you point out a correction, I immediately:
• Update my context for this conversation.
• Use that corrected info for any related follow-up.
• Explain the reasoning so you know I’m grounding it properly.

Is AI Good For Planning a Photoshoot?
Although I have pointed out a mistake made by Copilot here, the other assistants are equally prone to errors. If the questions are simple, they are all likely to get them right. I asked them all what time high tide was here tomorrow and in a couple of months’ time, and they all answered correctly. However, when the question was more abstract, they all failed.
I then asked my assistants how long the golden hour will last at my location tomorrow, given that sunrise is at 08:23. According to Photopills, the golden hour will start at 07:54 (29 minutes before sunrise), which I would disagree with. It also says it will end at 09:37, but I think it lasts longer than that here.
Google:
The morning golden hour at [my location] tomorrow, Wednesday, December 10, 2025, will last for approximately 1 hour and 16 minutes. Starts: 4:52 AM (Blue hour transitions to golden hour) Ends: 6:08 AM (Sun is higher in the sky). Sunrise Time: The sun will officially rise at 8:19 AM.
Copilot:
Tomorrow morning in [my location], the golden hour will last for about 1 hour and 23 minutes, starting at 07:17 and ending at 08:40. This period begins shortly after sunrise (around 08:22) and offers the soft, warm light ideal for photography.
Alexa:
I’m not quite sure how to help you with that.
Siri:
Failed to give an answer
Meta:
The Golden Hour in [my location] tomorrow (December 10) is expected to be around 56 minutes long in both the morning and evening.
- Morning Golden Hour: 8:00 AM – 8:56 AM
- Evening Golden Hour: 2:52 PM – 3:48 PM
Keep in mind that the exact duration may vary slightly depending on your specific location within [my town].
ChatGPT:
ChatGPT gave the most complete and accurate answer:
Good question. The length of “morning golden hour” in [your location] will vary depending on season, but here’s an estimate for tomorrow:
Sunrise in [my location] — roughly 08:22 AM.
According to typical definitions (sun between –4° and +6° above horizon) the “golden hour” begins just before sunrise (or at sunrise) and continues until the sun rises high enough that the angle exceeds about +6°.
For a place at [my location’s] latitude in December, that means the morning golden-hour would likely last around 30 to 45 minutes. This matches the general guideline that golden hour in winter at mid-latitudes tends to be shorter — often ~30–60 minutes.
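ChatGPT’s definition can actually be sanity-checked with a little spherical astronomy. As a rough sketch (my own assumptions, not from any of the assistants: a latitude of about 55.3°N, a mid-December solar declination of about −23°, and the −4° to +6° definition quoted above), the standard altitude formula sin(alt) = sin(lat)·sin(dec) + cos(lat)·cos(dec)·cos(H) lets you estimate how long the sun takes to climb through the golden-hour band:

```python
import math

def hour_angle_at_altitude(alt_deg, lat_deg, dec_deg):
    """Hour angle (degrees from solar noon) when the sun reaches alt_deg.

    Rearranged from sin(alt) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(H).
    Ignores refraction and the equation of time: a rough estimate only.
    """
    alt, lat, dec = map(math.radians, (alt_deg, lat_deg, dec_deg))
    cos_h = (math.sin(alt) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec)
    )
    return math.degrees(math.acos(cos_h))

# Assumed values: roughly 55.3 degrees N latitude, and a solar
# declination of about -23 degrees in mid-December.
LAT, DEC = 55.3, -23.0

# The hour angle shrinks towards solar noon as the sun climbs, and the
# Earth turns 15 degrees per hour, so the morning duration is:
h_start = hour_angle_at_altitude(-4.0, LAT, DEC)  # sun at -4 degrees
h_end = hour_angle_at_altitude(6.0, LAT, DEC)     # sun at +6 degrees
duration_min = (h_start - h_end) / 15.0 * 60.0

print(f"Estimated morning golden hour: {duration_min:.0f} minutes")
```

Plugging in your own latitude and the day’s declination gives an independent figure to compare against whatever an assistant claims; it is a crude model, but it is enough to spot answers that are wildly off.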

Can AIPAs Tackle the Problem With Traditional Camera Reviews?
It must have been 15 or more years ago that I read two reviews of the same Canon camera in two different, now-defunct magazines. One review sang its praises, giving it five stars, while the other tore it to bits, giving it just two. Unsurprisingly, Canon took out full-page adverts in the first magazine. Meanwhile, the second magazine featured only Nikon adverts.
Although I find camera reviews on Petapixel and a few other sites well-balanced and researched, there is a flaw in almost all reviews, and it is one that AI should, given time, negate. That flaw is the newness of the gear being reviewed: reviewers cannot pick up the problems that only appear after extended use.
For example, a brief look at reviews published at the time of its release found only one that mentioned one well-known camera’s infamous overheating issue, and even then, the review had been edited after the issue was discovered. A couple mentioned the camera’s short battery life, and none mentioned the poor low-light focusing performance that many users have since found. Most reviews are conducted by a handful of people testing brand-new models, so they cannot capture the issues that only emerge after tens of thousands of hours of real-world use, once a camera has been in the hands of everyone who bought it for a year or more.
In theory, AI should gather all the information on the internet and form an opinion. Indeed, most of the AIPAs did report the same camera’s failings. A win for AI.

AI Models Don’t Agree
So, is AI reliable enough to look at long-term reliability? I wanted to find out, so I asked each system to tell me the three least reliable recent camera models.
Copilot said it was the Nikon Z8, Nikon Z30, and the Fujifilm X-S20. Meanwhile, ChatGPT chose the Canon EOS R100 and R10, plus the Sony ZV-E10. Google’s AI claimed it was the Fujifilm XF1, Canon EOS R5, and the Fujifilm X-M5. It also said that Canon bodies with plastic base plates were inadequate. Meanwhile, Alexa’s reply was its ubiquitous, “Hmm, I don’t know that.”
When I asked Meta’s AI, it seemed to give the longest reply, albeit not knowing the actual answer to the question I asked:
Finding info on the most unreliable mirrorless cameras from the last two years is a bit tricky, as reliability can be subjective and varies depending on usage. However, I couldn’t find specific data on the top three most unreliable mirrorless cameras released in the last two years.
Here are some brands that have received criticism for reliability issues:
Panasonic Lumix: Known for slow and soft focusing, glitchy focus tracking, and limited lens selection.
Sigma: While producing high-quality lenses, Sigma’s cameras have been criticized for scattered development focus and quirky designs.
Nikon: Struggling with low-light performance, autofocus consistency, and mirrorless camera range support.
Canon: Surprisingly, Canon ranked last in a reliability survey.
However, it failed to mention Leica, Pentax, and OM System in its results.
The various AI models’ answers for the most reliable cameras were equally diverse across all brands. (Pleasingly for me, my camera came out on top in three of the AIs’ results.)

Should We Embrace AIPAs?
As research tools, AIPAs are not foolproof, but they do provide helpful guidance. They can give correct answers to simple questions, but it is always worth double-checking, as they can make mistakes. When the questions are more in-depth or nuanced, they do struggle.
I should mention that they are good at identifying scams. For example, some online marketplaces are full of counterfeit items, and AI can be very good at identifying them from photos. It can also identify scam websites, something Google fails to filter out in its search results. Someone I know lost a lot of money after buying a lens from a legitimate-looking website that ranked highly in search results. Afterwards, when we asked ChatGPT whether the site was a scam, it said it was. A friend of mine uses AI successfully to check the photos on Vinted to see whether the advertised clothing items are genuine.

Happily, it is rubbish at writing engaging, readable prose. However, I can see a time in the future when human-written photography articles will become a thing of the past, unless they are hidden behind AI-proof paywalls and inaccessible to bots. YouTube videos are also used to train AI models.
YouTube creators can prevent their videos from being used to train third-party AI models. However, creators cannot currently opt out of Google using their content to train its own AI models, such as Gemini and Veo.
Think of AI personal assistants as newly employed delinquent apprentices who have turned up for work after a late night of partying. They will do the job, but not as well as an experienced, older worker. Their answers won’t be complete, and they will sometimes get it wrong. They are also more likely to steal from the workplace. However, they show promise for the future.
(N.B. None of the pictures in this post were AI-generated.)



