Everything you need to know about how InferencePort handles your data and what you agree to by using it.
InferencePort AI does not require users to create an account; the application can be used without signing in. Signing in is optional and is required only if you choose to enable certain features, such as chat syncing or connecting to a remote server.
By default, all chats are stored locally on your device. When you are not signed in, no chat data is transmitted to or stored on our servers.
If you sign in and explicitly enable chat syncing, your chat messages and conversation metadata are collected and stored securely to synchronize your chats across devices.
If you choose to create an account, we may collect:
Do not use personally identifying information in your username.
InferencePort AI does not use third-party analytics, advertising, or tracking services by default. The application may interact with:
Data transmitted to these services is always user-initiated. Some services may process data on servers outside the United States.
For security concerns, email [email protected] with subject line "InferencePort AI Security Disclosure".
For general questions, open a GitHub issue or email [email protected] with the subject line "InferencePort AI General Question". Reports are generally reviewed within 1–2 weeks.
These Terms of Service ("Terms") govern your use of InferencePort AI (the "Application"), developed by Rihaan Meher (the "Developer"). By installing, accessing, or using the Application, you agree to these Terms.
InferencePort AI is a desktop application that allows users to run locally installed generative AI models on their own devices. For local models, AI inference does not rely on any servers operated by the Developer, and AI-generated content is produced entirely on the user's device.
The Developer does not control, monitor, review, filter, moderate, or store any AI-generated content produced by the Application, unless you explicitly share it through a reporting channel or the chat sync feature.
The Application enables the use of third-party and open-source AI models. These models may generate content that is inaccurate, misleading, offensive, or otherwise harmful.
You are solely responsible for how you interpret, use, share, or rely on any AI-generated content.
To the maximum extent permitted by law, the Developer shall not be liable for any direct, indirect, incidental, consequential, special, or punitive damages arising from AI-generated content, user prompts, model behavior, or third-party models. You assume all risks associated with use of the Application.
You agree that you are solely responsible for selecting and configuring AI models, ensuring compliance with applicable laws, evaluating the accuracy and legality of AI-generated content, and any actions taken based on AI outputs. The Application is intended for informational and experimental purposes only and must not be relied upon as professional, legal, medical, or financial advice.
The Application incorporates third-party and open-source components governed by their own licenses. The Application itself is licensed under the Apache License, Version 2.0. In the event of a conflict between these Terms and an open-source license, the open-source license shall govern for covered components.
If you encounter AI-generated content that is inappropriate or harmful, email [email protected] with subject line "InferencePort AI Inappropriate Content Disclosure". Reports without this subject line may not be reviewed.
Reports are reviewed within 1–2 weeks. Individual responses are not guaranteed. We reserve the right to ignore communications submitted in bad faith.
By using InferencePort AI, you acknowledge that you have read, understood, and agreed to these Terms. The Developer reserves the right to change these Terms at any time without prior notice.