The human role in an automated framework
Mitev argues that the critical human contribution occurs before the first trade is executed. Analysts select the data used to train the algorithms, define variables, set parameters and run continuous diagnostics. He calls it “very dangerous” to override a properly calibrated model once it is live, describing trust in the output as a fundamental rule. When a model’s recommendations are vetoed, he says, subsequent market moves often reveal that the algorithm, not the person, was correct.
Despite this admonition, the firm maintains an active quality-control process. Engineers search for errors in incoming data, verify calculations and feed updated information into the models so they remain current. This vigilance is designed to mitigate well-documented risks such as overfitting, a condition that arises when an algorithm mistakes random market “noise” for meaningful signals. According to Mitev, rigorous design standards, back-testing and live simulations help prevent the system from generating unreliable forecasts.
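Overfitting is easy to demonstrate in a few lines. The sketch below is a generic illustration (not SmartWealth’s code or data): an overly flexible model fits a series of pure random “returns” almost perfectly in-sample, yet fails on fresh data drawn from the same process, which is exactly the failure mode back-testing is meant to catch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 "returns" that are pure noise, with no signal to learn.
x = np.linspace(0, 1, 20)
returns = rng.normal(0, 1, size=x.size)

# A high-degree polynomial "model" can track this noise very closely in-sample...
coeffs = np.polyfit(x, returns, deg=15)
in_sample_error = np.mean((np.polyval(coeffs, x) - returns) ** 2)

# ...but on a fresh draw from the same process, its predictions fail badly.
new_returns = rng.normal(0, 1, size=x.size)
out_of_sample_error = np.mean((np.polyval(coeffs, x) - new_returns) ** 2)

# Overfitting signature: in-sample error far below out-of-sample error.
print(in_sample_error < out_of_sample_error)
```

Comparing in-sample against out-of-sample error, as the back-tests and live simulations described above do at much larger scale, is what separates a model that found a signal from one that memorized noise.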
A limited forecasting horizon
Mitev makes a clear distinction between short-term projections, which he believes algorithms handle effectively, and long-term market calls, which he views as inherently uncertain. He claims the SmartWealth framework can see roughly one month ahead with an acceptable level of confidence but considers one-year forecasts “not possible” in a probabilistic sense. By evaluating near-term data and responding automatically, the firm seeks to remove the emotion that often drives human trading decisions.
Market sentiment remains a powerful force even in the current wave of enthusiasm for generative AI. The European Central Bank has cautioned that much of the recent rally in technology shares may reflect fear of missing out rather than fundamental analysis. Mitev maintains that stripping out such emotions leads to better investment outcomes, provided the underlying data are accurate and the models are continuously monitored.
Development in-house, not off the shelf
Building and improving AI systems is an iterative process that can take years, Mitev notes. For that reason, SmartWealth develops all of its technology internally rather than relying on licensed software. He believes this approach allows the firm to differentiate its strategies and adjust quickly when markets evolve or new data sources become available.
The emphasis on proprietary research extends to model governance. SmartWealth’s teams test new algorithms in parallel with production systems before introducing any changes. Live environment testing, scenario analysis and stress tests form part of the release checklist. If results deviate from expectations, engineers diagnose whether the cause is data quality, model specification or an external market shock, and then refine the system accordingly.
Balancing trust and verification
Mitev acknowledges that artificial intelligence can err, sometimes “hallucinating” relationships that do not exist. He attributes most missteps to overfitting, inaccurate data or structural flaws in the model. Continuous validation and the injection of fresh data sets are intended to correct these issues without human traders second-guessing individual signals.
Ultimately, SmartWealth’s founder sees a symbiotic relationship between people and machines. Algorithms execute complex, rapid calculations free from emotion, yet humans architect the system, police its inputs and interpret its aggregate performance. As the IVAC fund seeks institutional capital and a 14%-15% return profile, that blend of automation and oversight remains central to the firm’s pitch.
Image credit: CNBC