125 Cambridgepark Drive Cambridge, MA 02140 +1 877.848.9903

Data Use Policy

IPM.ai Privacy Protection

Protecting privacy has always been a challenge, and one that many companies have failed to take with the appropriate level of seriousness. When it comes to your health data, the legal and ethical stakes are immensely magnified, and the level of commitment must be absolute and unwavering. This is the commitment we made when we founded IPM.ai: not merely to meet the letter of the law, but to go beyond it and create a systemic approach to privacy in which the very architecture of the solution ensures that patient privacy can never be compromised.

Traditional methods of determining whether de-identified health data is HIPAA compliant rely on an expert determination of the individual output dataset: is the expert able to re-identify the data down to one of a small group? While this approach has until now been considered best in class, it raises a number of potential issues:

  1. How skilled was the expert and how much effort did they expend?
  2. Was it equivalent to what a bad actor would apply?
  3. Was the data set recertified when additional data (Depth) was added?
  4. Was the data set recertified every time a new data element (Breadth) was added?

As the data set is enhanced, has sufficient care been taken to “fuzz” the data where necessary to prevent re-identification? This approach, while the best on offer, creates a long list of potential failure points. Recertification is time-consuming and expensive, so how much new data is enough to trigger the need for recertification? “Fuzzing” certain data, especially in cases like rare disease, is often necessary to prevent re-identification; yet this fuzzing works at cross-purposes by weakening any data modeling. And of course, the ultimate safety of the data depends on the expertise of the person attempting the re-identification. As techniques, computing power and rewards increase over time, today’s privacy protections may not stand up to tomorrow’s privacy pirates.

When we founded IPM.ai we wanted to avoid all of these issues. We wanted to develop a fundamental architecture that prevented re-identification in all cases. We wanted to create a system where client data could easily be imported to improve modeling, without the need for recertification or any risk of re-identification. And we wanted to build a system that would withstand the test of time and improvements in computing and AI technology. We are proud to say that all of these goals have been achieved with our patent-pending privacy architecture. To our knowledge, IPM.ai is the first company to have not just a data set certified as HIPAA compliant, but a fundamental architecture certified as well. This allows for the generation of an unlimited number of analytics and model outputs without concern that these outputs could be re-identified or fall out of compliance with HIPAA, and without the need to re-analyze the output data upon any change to the input dataset. This provides not only the ultimate in AI modeling flexibility and the greatest possible flexibility to ingest client/custom data, but most importantly the ultimate safeguards for security and privacy through the removal of all human and subjective elements from the process.

IPM.ai (“we,” “us,” and “our”) incorporates privacy principles (including a privacy-by-design HIPAA-compliant methodology) into our proprietary and patented modeling and analytics platform (“Platform”). The Platform receives, processes, and creates data that is not Protected Health Information (PHI) or personally identifiable information (PII). We do not distribute or sell PHI or PII. The Platform receives and analyzes data that is neither PHI nor PII to gain insights into aggregated and pseudonymous populations, and we share those insights with our affiliates and customers to improve marketing and analytics effectiveness.

What Data Do We Receive and Create?
The Platform processes health data that does not include PHI or PII. Additionally, with respect to the processing of health data, the reputable third parties with whom we work represent that the data the Platform receives does not contain PHI, including by providing independent attestations. In addition to health-related data, our Platform may utilize other data, including demographic and psychographic data. Data is processed using our Platform’s proprietary privacy-engineered artificial intelligence and machine learning technology. Our proprietary technology, systems and processes have been verified by an independent third party to assure HIPAA compliance and to validate that the data inputs and resulting derivatives do not include or consist of PHI.

How Do We Obtain Data?
We strive to work with vendors and partners who share our values. In particular, IPM.ai seeks data providers that are reputable in the industry, demonstrate compliance with privacy-friendly principles and applicable privacy law, including HIPAA, and honor consumer choices regarding marketing and advertising preferences. The Platform does not collect health information directly from patients.

How Do We Use and Disclose Data?
Our proprietary and patent-pending Platform uses privacy-engineered artificial intelligence and machine learning techniques to analyze data sets to derive insights about populations of aggregated, pseudonymous individuals. We may also use such data to improve the Platform and our products and services. We share the insights with our affiliates and customers in accordance with contractual and legal requirements so that they may inform marketing and analytics effectiveness. We may be required by applicable law to provide the data the Platform processes to legal authorities.
IPM.ai takes privacy so seriously that we built it into our system, and we invite you to learn more by contacting us at privacy@ipm.ai.