When we integrated AI into Art Aura, the requirement was non-negotiable: everything runs on the device, nothing leaves the device, and it has to work without an internet connection. Apple's Foundation Models framework made this possible.
What Apple Intelligence actually is
Apple Intelligence is Apple's system for running language models directly on device hardware. On iPads and iPhones with M-series or A17 Pro chips, the Neural Engine — a dedicated processor with up to sixteen cores and 38 trillion operations per second on the M5 — handles the computation. The model weights are stored locally. The inference runs locally. No network request is made.
This is a fundamentally different architecture from cloud-based AI services like ChatGPT, Claude, or Gemini. Those services send your prompt to a remote server, process it there, and return the result. They are powerful, and for many applications they are the right choice. But they have inherent limitations for an app that handles sensitive business data: they require internet connectivity, they involve transmitting your data to a third party, and they typically charge per query or per token.
With Apple Intelligence, the language model is part of the operating system. Apps access it through the Foundation Models framework, the same way they access the camera or the accelerometer — as a system capability. There are no API keys, no usage limits, no per-query costs, and no terms of service to accept beyond Apple's standard developer agreement.
What Art Aura does with it
Art Aura uses Apple Intelligence for two distinct features: generating artwork descriptions and understanding natural language search queries. Both are designed as tools for professionals, not replacements for professional judgement.
Description Generator
Writing catalogue descriptions is one of the most time-consuming tasks in gallery administration. A proper entry requires a concise factual description for database use, a short interpretive text for exhibition labels, and sometimes a paragraph of art historical context for catalogues or press materials. For a gallery handling dozens of new works a month, this adds up.
Art Aura's Description Generator takes the factual information you have already entered — title, artist, year, medium, dimensions — and produces three texts:
Catalogue Description
Two to three sentences of factual, neutral prose suitable for a database entry or collection catalogue. Focuses on what the work is: medium, technique, subject, scale.
Exhibition Label
Thirty to fifty words of accessible interpretive text, the kind of thing you might see on a wall label in a museum or gallery. Written for a general audience.
Historical Context
A paragraph placing the work within the artist's practice and the broader art historical moment. Useful for press releases, catalogue essays, and client communications.
Each text is presented with a "Use This" button. You review it, edit if needed, and save. The typical generation time is under thirty seconds on M-series devices.
An important caveat: these are first drafts. The model draws on its training data, which is extensive but not infallible. It may get biographical details wrong or make connections that a specialist would not. The value is in the time saved, not in the replacement of expertise. A gallerist who knows their programme will spend two minutes refining a generated description instead of fifteen minutes writing one from scratch.
Smart Search
The second AI feature is less visible but arguably more useful in daily operation. Art Aura's Smart Search accepts natural language queries and translates them into structured filters against your collection.
When you type "available paintings under ten thousand," the AI parses this into a structured intent: status equals available, medium category equals painting, price less than 10,000. The app then filters your inventory accordingly. The same query expressed differently — "what paintings do we have for sale below 10k" — produces the same result.
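Once a query has been parsed into that structured intent, applying it is ordinary deterministic filtering. A minimal sketch of the idea follows; the type and field names here are illustrative, not Art Aura's actual API:

```swift
// Hypothetical structured intent produced from a query such as
// "available paintings under ten thousand". Nil fields mean "no constraint".
struct QueryFilters {
    var status: String?          // e.g. "available"
    var mediumCategory: String?  // e.g. "painting"
    var maxPrice: Double?        // e.g. 10_000
}

struct Artwork {
    let title: String
    let status: String
    let mediumCategory: String
    let price: Double
}

/// Applies only the constraints that are present; everything else passes through.
func apply(_ filters: QueryFilters, to inventory: [Artwork]) -> [Artwork] {
    inventory.filter { work in
        (filters.status == nil || work.status == filters.status)
            && (filters.mediumCategory == nil || work.mediumCategory == filters.mediumCategory)
            && (filters.maxPrice == nil || work.price < filters.maxPrice!)
    }
}

// "available paintings under ten thousand" becomes:
let filters = QueryFilters(status: "available", mediumCategory: "painting", maxPrice: 10_000)
```

Because unset fields impose no constraint, "what paintings do we have for sale below 10k" maps to exactly the same struct and therefore the same results.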
The AI also handles medium expansion intelligently. If you search for "sculptures," it includes works in bronze, marble, stone, wood, ceramic, glass, and metal. A search for "paintings" includes oil, acrylic, watercolour, gouache, tempera, and fresco. This means you do not need to know the exact medium string stored in your database — you can think in categories, and the app understands.
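The expansion itself need not involve the model at all. A sketch of one way to do it — the category lists below are taken from the examples above, but the function name and fallback behaviour are assumptions:

```swift
// Category-to-medium expansion as a plain dictionary lookup; no AI involved.
// The actual lists in Art Aura may differ.
let mediumCategories: [String: [String]] = [
    "painting": ["oil", "acrylic", "watercolour", "gouache", "tempera", "fresco"],
    "sculpture": ["bronze", "marble", "stone", "wood", "ceramic", "glass", "metal"],
]

/// Returns the medium strings a category search should match,
/// falling back to the query itself for unknown terms.
func expandMedium(_ query: String) -> [String] {
    var key = query.lowercased()
    if key.hasSuffix("s") {          // crude singularisation: "sculptures" -> "sculpture"
        key = String(key.dropLast())
    }
    return mediumCategories[key] ?? [query.lowercased()]
}
```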
Conversational refinement
On devices with Apple Intelligence, Smart Search supports multi-turn conversations. You can start with a broad query and narrow it progressively:
"Available paintings" → "Now only the ones over five thousand" → "Sort by price" → "Just the ones by Lindqvist"
The AI maintains context between turns, understanding that "the ones" refers to the results from the previous query. This is managed through a conversation session that tracks up to eight turns before automatically resetting. A "New Search" button is always available to start fresh.
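The eight-turn bookkeeping is simple to sketch. The threshold comes from the behaviour described above; the type and method names are hypothetical:

```swift
// Minimal sketch of multi-turn session bookkeeping with an automatic
// reset after eight turns. Names here are illustrative, not Art Aura's.
struct ConversationTracker {
    private(set) var turnCount = 0
    let maxTurns = 8

    /// Records one query/response turn; returns true if the session
    /// was automatically reset because the limit was reached.
    mutating func recordTurn() -> Bool {
        turnCount += 1
        if turnCount >= maxTurns {
            reset()
            return true
        }
        return false
    }

    /// Equivalent to tapping "New Search".
    mutating func reset() {
        turnCount = 0
    }
}
```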
Refinement chips — "Only available," "Sort by price," "Under €10k" — appear as quick-tap suggestions, combining the convenience of structured filters with the flexibility of natural language. After each query, the search field clears so you can type a follow-up without deleting your previous input.
Why this matters for art businesses
The art market has a particular relationship with confidentiality. It is one of the few industries where the names of your customers, the prices they paid, and the inventory they are considering are all genuinely sensitive information. A collector may not want it known that they are selling. An artist may not want their secondary market prices public. An advisor's client list is, in a very real sense, their business.
Cloud-based AI services, however well-intentioned their privacy policies, introduce a structural risk. Data must be transmitted, processed on remote hardware, and transmitted back. Even with encryption in transit and at rest, the data exists on infrastructure you do not control. For many art professionals, this is a non-starter.
On-device AI eliminates this concern entirely. When Art Aura generates a description or processes a search query, the computation happens on the Neural Engine inside your iPad. The data — artwork titles, artist names, prices, client names — never leaves the device. There is no network request, no server log, no analytics event. The AI is as private as a calculation performed on a calculator.
There are also practical benefits beyond privacy. On-device AI works at art fairs where Wi-Fi is unreliable. It works in storage facilities with no signal. It works on aeroplanes. There are no rate limits or outages to worry about. And because Apple bundles the AI capability with the device, there is no additional cost per query — you can generate a hundred descriptions in a day without any incremental expense.
The technical foundation
For those interested in the implementation: Art Aura uses Apple's Foundation Models framework, introduced with iOS 26 and macOS 26. The key abstraction is the @Generable struct — a Swift type that describes the shape of the output you want from the language model.
@Generable
struct GeneratedArtworkDescription: Sendable {
    @Guide(description: "Professional 2-3 sentence catalog description")
    let catalogDescription: String

    @Guide(description: "Brief exhibition label (30-50 words)")
    let exhibitionLabel: String

    @Guide(description: "Art historical context")
    let historicalContext: String
}
The @Guide annotations tell the model what kind of text to produce for each field. The framework handles the prompt construction, inference, and structured output parsing. The developer gets back a typed Swift struct with the generated text, not raw text that needs to be parsed.
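A call into the framework then looks roughly like the sketch below, based on the public Foundation Models API (LanguageModelSession and its respond(to:generating:) method). The prompt wording and error handling are illustrative, not Art Aura's actual implementation:

```swift
import FoundationModels

// Sketch of a generation call. The instructions and prompt text here are
// placeholders; Art Aura's real prompts will differ.
func generateDescription(for artworkSummary: String) async throws -> GeneratedArtworkDescription {
    let session = LanguageModelSession(
        instructions: "You write professional texts for an art gallery's records."
    )
    let response = try await session.respond(
        to: "Write descriptions for this artwork: \(artworkSummary)",
        generating: GeneratedArtworkDescription.self
    )
    return response.content  // a typed struct, not raw text to parse
}
```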
For Smart Search, the pattern is similar. A @Generable struct describes the possible search parameters — entity type, status filter, medium, artist name, price range, sort order — and the model fills in whichever fields are relevant to the user's query. The rest of the search pipeline (filtering, sorting, medium expansion) is deterministic code, not AI. The model's only job is to translate natural language into a structured intent.
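A hedged sketch of what such a search-intent type could look like — the field names and guide texts are guesses from the parameters listed above, not Art Aura's actual definitions:

```swift
import FoundationModels

// Hypothetical search intent: optional fields let the model fill in only
// what the user's query actually mentions.
@Generable
struct ArtworkSearchIntent: Sendable {
    @Guide(description: "Status filter such as 'available' or 'sold', if mentioned")
    let status: String?

    @Guide(description: "Medium category such as 'painting' or 'sculpture', if mentioned")
    let mediumCategory: String?

    @Guide(description: "Artist name, if mentioned")
    let artistName: String?

    @Guide(description: "Maximum price, if the query sets an upper bound")
    let maxPrice: Double?

    @Guide(description: "Sort order, e.g. 'priceAscending' or 'priceDescending', if requested")
    let sortOrder: String?
}
```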
The entire AI layer is wrapped in an actor-based service (FoundationModelsService) that handles availability checking, session management, and thread safety. A language model session is created once and reused across queries, avoiding the overhead of repeated model loading. The session is cleaned up when the user leaves the search view.
Graceful degradation
Not every device supports Apple Intelligence. It requires an M1 chip or later (iPad Pro, iPad Air) or an A17 Pro or later (iPhone 15 Pro). On older devices, Art Aura works exactly as before — the AI features simply are not shown. Smart Search falls back to a keyword parser that handles the most common query patterns: artist names, medium types, price ranges, availability status. The parsing is slightly less flexible (you need to be more precise in your wording), but the core functionality is identical.
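A much-simplified sketch of what such a keyword fallback can look like — Art Aura's real parser handles more patterns (artist names, word-form numbers, and so on), and all names here are hypothetical:

```swift
// Simplified keyword fallback for devices without Apple Intelligence.
// Handles an availability flag and numeric "under/below N" price bounds;
// everything else is treated as a plain keyword.
struct FallbackQuery {
    var availableOnly = false
    var maxPrice: Double?
    var keywords: [String] = []
}

func parseFallback(_ query: String) -> FallbackQuery {
    var result = FallbackQuery()
    let tokens = query.lowercased().split(separator: " ").map(String.init)
    var index = 0
    while index < tokens.count {
        let token = tokens[index]
        if token == "available" {
            result.availableOnly = true
        } else if token == "under" || token == "below",
                  index + 1 < tokens.count,
                  let price = Double(tokens[index + 1]) {
            result.maxPrice = price
            index += 1  // consume the number token as well
        } else {
            result.keywords.append(token)
        }
        index += 1
    }
    return result
}
```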
This is an important design principle. AI should be additive, not foundational. If you took away the AI layer entirely, Art Aura would still be a fully functional inventory management app. The AI makes certain tasks faster and more natural, but it is never the only way to accomplish them.
AI as tool, not oracle
There is a tendency in the current moment to treat AI as either revolutionary or dangerous, often both at once. In the context of a professional tool for the art market, I think a more measured view is appropriate.
The AI in Art Aura is a writing assistant and a query interpreter. It can produce a serviceable first draft of a catalogue description in seconds. It can understand that "expensive abstract paintings on canvas" means "filter by medium containing oil or acrylic, sort by price descending, filter by status available." These are genuinely useful capabilities that save time in daily operations.
But the AI does not know your artists. It does not understand the specific significance of a particular work within a particular practice. It cannot assess condition, or judge whether a price is appropriate for the current market, or sense whether a collector is genuinely interested. These remain human skills, and they are the skills that make a good gallerist or advisor valuable.
We built Art Aura's AI features with this distinction firmly in mind. The generated descriptions are presented as suggestions, not as finished text. The search results are starting points, not conclusions. The AI handles the mechanical parts — parsing, drafting, filtering — so that professionals can focus on the parts that require judgement, taste, and relationship.
On-device AI is still in its early days. The models will improve. The capabilities will expand. But the principle we have adopted — private, local, additive — is one we intend to maintain. The art world's data deserves the same care as the art itself.