Executive summary
- While Israel's lighter regulatory touch on AI may initially benefit startups, Israeli companies focus on foreign markets, which ultimately requires compliance with international standards.
- Global AI regulation may become fragmented, but startups and their investors can avoid major compliance risks by adhering to established principles such as those of the OECD.
- Most AI startups use third-party AI technology. Thorough vetting of third-party AI vendors is crucial to mitigate risks related to IP claims and the use of problematic data sets.
- Israel's AI policy environment is sectoral, prioritising existing regulators over a comprehensive framework. While government support exists, much of the investment comes from international and private sources.
Article
You work primarily with founders and venture capital investors from Israel. Israel has a strong talent pool and ecosystem, but it doesn't have a large domestic market. How do AI startups compensate for this?
The journey begins in Israel, with talent and an idea. The company is incorporated in Israel, sometimes in the U.S. in parallel. When seeking customers, Israeli founders almost immediately look abroad, primarily to the U.S. and, to a lesser extent, Europe.
Once the company becomes too big to be ignored, the founders look for partners from larger markets to help defend and grow their market presence. The primary objective is eventually to exit or partner with an established U.S. or European strategic investor.
There is also an unwritten cap; once a startup’s valuation becomes too high, it might become too costly to be acquired. It is about getting this timing right.
When building their market presence, can Israeli startups benefit from lighter regulation on AI compared to their European counterparts, who have to deal with regulations such as the EU AI Act?
I believe any competitive advantage would be very limited. Since Israeli startups are so reliant on foreign market access, they principally have to go through the same compliance journey as European or, for that matter, U.S. startups.
Also, the EU GDPR and the EU AI Act have an extra-territorial scope.
So, for the competitiveness of European startups, what likely matters more is the overall environment in Europe compared with Israel and the U.S.: the availability of funding, talent, infrastructure, and customer demand.
It is important to be realistic: regulation is not exactly top of mind for founders. AI startups primarily need to develop a defensible proof of concept, use the best data available, win a couple of anchor customers, and avoid failure in the process.
Founders start to grapple with how their business model complies with regulations when they seek substantial investment beyond the seed stage, in particular from professional venture capitalists, or when they are being vetted by their first large customers.
What does the typical compliance journey for emerging companies look like?
It always starts with establishing basic internal policies and mapping relevant business processes.
When the EU GDPR was enacted, there was a significant global movement towards compliance. The first thing everybody did was to write a policy. It is the easiest thing to do. And if somebody asks, “Are you GDPR compliant?”, you can say, “Here, I have a packet of policies and they’re really well-written.”
But of course, paper alone does not get you very far. As the company grows and starts interacting with customers, operating procedures, reporting channels, compliance-by-design tools, testing, and audits become crucial.
The evolution from basic policies to a comprehensive compliance and internal governance system should ideally mirror the company's lifecycle, from proof of concept to a fully operational entity engaging with international markets.
You emphasise the importance of vendor vetting for AI technology purchased from third parties. Why?
Most AI startups use third-party AI technology. Israeli startups like to move very quickly, trying and integrating various tools, including those from smaller vendors, to push forward as fast as possible. These smaller vendors might claim intellectual property rights over the outputs of their models, and there is also the risk that they use problematic data sets.
We advise all our clients to conduct thorough vetting early on, and to fix or remove any questionable integrations as soon as possible, to prevent significant issues later.
The EU GDPR has set a global standard for privacy regulation, yet global AI regulation remains fluid. How should startups and their investors navigate the varying and potentially inconsistent AI regulations worldwide?
I am not too concerned about fragmented regulation. It does not necessarily take the "Brussels effect" to reach a level of alignment sufficient for a global AI ecosystem to develop. Startups and their investors can avoid major compliance risks by adhering to established principles, such as those developed by the OECD.
On a very practical level, what steps can early-stage founders take to ensure AI compliance?

First, understand who is using the AI and for what purpose. This helps establish where the focus should lie: data protection, non-bias, cybersecurity, stability or accuracy. Consider the company's long-term intentions for using AI as well. For instance, if AI is initially implemented in a specific department such as finance, consider its potential expansion into other areas.
Second, look at the input: Does it include personal data or confidential company data? Are the data sources reliable, or do they need monitoring?
Third, consider any legal constraints. AI applications in sensitive areas like HR technology often fall into the high-risk category under regulations such as the EU AI Act, which may require additional work before they can go online.
Finally, evaluate the risk of harm. In areas where AI affects significant decisions, like in consumer credit scoring, it’s critical to ensure the system is robust and unbiased. Your compliance system should be watertight. Conversely, you might accept certain risks with internal tools, particularly if oversight mechanisms are in place and no critical decisions depend solely on the tool.
Are companies already thinking about adding new roles, such as AI officers, or changing their governance policies, e.g., by adding new board committees focused on AI?
As for AI-specific roles, they’re beginning to appear, particularly in Europe. But these roles are not yet common among early-stage or growth companies.
There may be a case for hiring dedicated AI regulatory specialists in companies deeply involved in AI system development. But for most early-stage and growth companies, particularly in Israel, this does not seem to be a priority.
As a founder, I would set different priorities.
Most companies use third-party AI technology, so I believe it is more important to have skilled people in procurement, vendor vetting, and vendor onboarding to make sure that any third-party AI tool can be used with confidence. This is when red flags should be raised to prevent problems further down the line.
The EU rushed to regulate AI, even though actual AI use cases are still developing rapidly. Has that made the EU a less attractive market for Israeli companies?
I believe the situation is more nuanced than that.
For early-stage Israeli companies still proving concepts or forming initial partnerships, the EU's stringent regulations, as seen with the GDPR, can indeed make market access more difficult. Language and cultural diversity add to the complexity, and engaging with European customers often requires extensive contracts and paperwork. All of this makes alternative markets appear simpler and less demanding on resources.
However, there are limits to this argument: the European market is closer, and operations there can still be significantly cheaper than in, for example, the U.S. In addition, in some sectors, such as the medical sector, a local intermediary is often used for distribution and the provision of ancillary services, allowing startups to better manage the regulatory burden.
Furthermore, it is not only the amount of paperwork that matters; liability risks can also drive costs up. Once liability risks are taken into account, it remains uncertain whether the EU is ultimately more costly to operate in than, for example, the U.S.
Looking at this from the perspective of investors, are certain markets more promising than others depending on their regulatory setup?
While a lightly regulated market might seem less complex due to fewer hurdles, it doesn’t always mean it’s a better investment destination.
Investors should look at the overall regulatory competence of a country, not just the current ease of doing business. Good regulation today indicates good AI regulation tomorrow.
That brings us to Israel. How conducive is Israel’s policy environment to AI investments?
Israel's approach to AI policy is sectoral: the Ministry of Innovation, Science and Technology's 2023 policy on artificial intelligence regulation and ethics relies on existing regulators rather than a comprehensive framework. The government also provides some support that helps early-stage companies access global markets. However, much of the investment comes from international private investors, rather than direct government funding.

How does the current geopolitical climate affect Israel’s tech innovation landscape?
The job opportunities offered by large multinationals and the state mean you can fail without risking your livelihood. That is a key feature of the Israeli tech ecosystem.
Masha Yudashkin is an associate in the Privacy, Data Protection and Cyber Department at Barnea Jaffa Lande.
Sources
- Israeli Ministry of Innovation, Science and Technology, 17 December 2023. Policy on Artificial Intelligence Regulation and Ethics. Retrieved from https://www.gov.il/en/pages/ai_2023
- Startup Genome, 2019, 2020, 2021, 2022, 2023. The Global Startup Ecosystem Report. Retrieved from https://startupgenome.com/. Last accessed 21 July 2024.