Commit afb49e21e7 (parent 002bfdf639) — Update SDKs to MIT license
Mirror of https://github.com/mendableai/firecrawl.git, synced 2024-11-16 11:42:24 +08:00

LICENSE (19 lines changed)
@@ -659,3 +659,22 @@ specific requirements.
 if any, to sign a "copyright disclaimer" for the program, if necessary.
 For more information on this, and how to apply and follow the GNU AGPL, see
 <https://www.gnu.org/licenses/>.
+
+Firecrawl - Web scraping and crawling tool
+Copyright (c) 2024 Sideguide Technologies Inc.
+
+This program is free software: you can redistribute it and/or modify
+it under the terms of the GNU Affero General Public License as published
+by the Free Software Foundation, either version 3 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU Affero General Public License for more details.
+
+You should have received a copy of the GNU Affero General Public License
+along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+For more information, please contact:
+Sideguide Technologies Inc.
README.md (45 lines changed)
@@ -21,7 +21,7 @@ We provide an easy to use API with our hosted version. You can find the playgrou
 - [x] [Node SDK](https://github.com/mendableai/firecrawl/tree/main/apps/js-sdk)
 - [x] [Langchain Integration 🦜🔗](https://python.langchain.com/docs/integrations/document_loaders/firecrawl/)
 - [x] [Llama Index Integration 🦙](https://docs.llamaindex.ai/en/latest/examples/data_connectors/WebPageDemo/#using-firecrawl-reader)
-- [X] [Langchain JS Integration 🦜🔗](https://js.langchain.com/docs/integrations/document_loaders/web_loaders/firecrawl)
+- [x] [Langchain JS Integration 🦜🔗](https://js.langchain.com/docs/integrations/document_loaders/web_loaders/firecrawl)
 - [ ] Want an SDK or Integration? Let us know by opening an issue.
 
 To run locally, refer to guide [here](https://github.com/mendableai/firecrawl/blob/main/CONTRIBUTING.md).
@ -212,7 +212,6 @@ curl -X POST https://api.firecrawl.dev/v0/scrape \
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
## Using Python SDK
|
## Using Python SDK
|
||||||
@@ -302,34 +301,29 @@ To scrape a single URL with error handling, use the `scrapeUrl` method. It takes
 
 ```js
 try {
-  const url = 'https://example.com';
+  const url = "https://example.com";
   const scrapedData = await app.scrapeUrl(url);
   console.log(scrapedData);
-
 } catch (error) {
-  console.error(
-    'Error occurred while scraping:',
-    error.message
-  );
+  console.error("Error occurred while scraping:", error.message);
 }
 ```
 
-
 ### Crawling a Website
 
 To crawl a website with error handling, use the `crawlUrl` method. It takes the starting URL and optional parameters as arguments. The `params` argument allows you to specify additional options for the crawl job, such as the maximum number of pages to crawl, allowed domains, and the output format.
 
 ```js
-const crawlUrl = 'https://example.com';
+const crawlUrl = "https://example.com";
 const params = {
   crawlerOptions: {
-    excludes: ['blog/'],
+    excludes: ["blog/"],
     includes: [], // leave empty for all pages
     limit: 1000,
   },
   pageOptions: {
-    onlyMainContent: true
-  }
+    onlyMainContent: true,
+  },
 };
 const waitUntilDone = true;
 const timeout = 5;
@@ -339,10 +333,8 @@ const crawlResult = await app.crawlUrl(
   waitUntilDone,
   timeout
 );
-
 ```
 
-
 ### Checking Crawl Status
 
 To check the status of a crawl job with error handling, use the `checkCrawlStatus` method. It takes the job ID as a parameter and returns the current status of the crawl job.
@@ -352,8 +344,6 @@ const status = await app.checkCrawlStatus(jobId);
 console.log(status);
 ```
 
-
-
 ### Extracting structured data from a URL
 
 With LLM extraction, you can easily extract structured data from any URL. We support zod schema to make it easier for you too. Here is how to use it:
@@ -393,17 +383,28 @@ console.log(scrapeResult.data["llm_extraction"]);
 With the `search` method, you can search for a query in a search engine and get the top results along with the page content for each result. The method takes the query as a parameter and returns the search results.
 
 ```js
-const query = 'what is mendable?';
+const query = "what is mendable?";
 const searchResults = await app.search(query, {
   pageOptions: {
-    fetchPageContent: true // Fetch the page content for each search result
-  }
+    fetchPageContent: true, // Fetch the page content for each search result
+  },
 });
-
 ```
 
 ## Contributing
 
 We love contributions! Please read our [contributing guide](CONTRIBUTING.md) before submitting a pull request.
 
-*It is the sole responsibility of the end users to respect websites' policies when scraping, searching and crawling with Firecrawl. Users are advised to adhere to the applicable privacy policies and terms of use of the websites prior to initiating any scraping activities. By default, Firecrawl respects the directives specified in the websites' robots.txt files when crawling. By utilizing Firecrawl, you expressly agree to comply with these conditions.*
+_It is the sole responsibility of the end users to respect websites' policies when scraping, searching and crawling with Firecrawl. Users are advised to adhere to the applicable privacy policies and terms of use of the websites prior to initiating any scraping activities. By default, Firecrawl respects the directives specified in the websites' robots.txt files when crawling. By utilizing Firecrawl, you expressly agree to comply with these conditions._
+
+## License Disclaimer
+
+This project is primarily licensed under the GNU Affero General Public License v3.0 (AGPL-3.0), as specified in the LICENSE file in the root directory of this repository. However, certain components of this project, specifically the SDKs located in the `/apps/js-sdk` and `/apps/python-sdk` directories, are licensed under the MIT License.
+
+Please note:
+
+- The AGPL-3.0 license applies to all parts of the project unless otherwise specified.
+- The SDKs in `/apps/js-sdk` and `/apps/python-sdk` are licensed under the MIT License. Refer to the LICENSE files in these specific directories for details.
+- When using or contributing to this project, ensure you comply with the appropriate license terms for the specific component you are working with.
+
+For more details on the licensing of specific components, please refer to the LICENSE files in the respective directories or contact the project maintainers.
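The `crawlerOptions` fields that recur in the hunks above (`excludes`, `includes`, `limit`) appear only as literals; their semantics can be sketched with a toy filter. This is not Firecrawl's implementation — `applyCrawlerOptions` and the sample URLs are invented purely to illustrate how the options constrain a candidate URL list:

```javascript
// Illustrative sketch only: NOT the SDK's internals. Shows how
// `excludes`/`includes`/`limit` style options narrow a list of URLs.
function applyCrawlerOptions(urls, { excludes = [], includes = [], limit = Infinity } = {}) {
  return urls
    // Drop any URL containing an excluded path fragment.
    .filter((u) => !excludes.some((pattern) => u.includes(pattern)))
    // An empty `includes` list means "all pages".
    .filter((u) => includes.length === 0 || includes.some((pattern) => u.includes(pattern)))
    // Cap the number of pages considered.
    .slice(0, limit);
}

const candidates = [
  "https://example.com/",
  "https://example.com/blog/post-1",
  "https://example.com/docs",
];
const kept = applyCrawlerOptions(candidates, { excludes: ["blog/"], includes: [], limit: 1000 });
console.log(kept); // the blog URL is filtered out, the other two remain
```

With `excludes: ["blog/"]` the blog post is dropped, the empty `includes` keeps everything else, and `limit` caps the result, matching the comments in the README's own example.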
apps/js-sdk/LICENSE (new file, 21 lines)
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2024 Sideguide Technologies Inc.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
apps/js-sdk/README.md (new file, 151 lines)
@@ -0,0 +1,151 @@
+# Firecrawl Node SDK
+
+The Firecrawl Node SDK is a library that allows you to easily scrape and crawl websites, and output the data in a format ready for use with language models (LLMs). It provides a simple and intuitive interface for interacting with the Firecrawl API.
+
+## Installation
+
+To install the Firecrawl Node SDK, you can use npm:
+
+```bash
+npm install @mendable/firecrawl-js
+```
+
+## Usage
+
+1. Get an API key from [firecrawl.dev](https://firecrawl.dev)
+2. Set the API key as an environment variable named `FIRECRAWL_API_KEY` or pass it as a parameter to the `FirecrawlApp` class.
+
+Here's an example of how to use the SDK with error handling:
+
+```js
+import FirecrawlApp from "@mendable/firecrawl-js";
+
+// Initialize the FirecrawlApp with your API key
+const app = new FirecrawlApp({ apiKey: "YOUR_API_KEY" });
+
+// Scrape a single URL
+const url = "https://mendable.ai";
+const scrapedData = await app.scrapeUrl(url);
+
+// Crawl a website
+const crawlUrl = "https://mendable.ai";
+const params = {
+  crawlerOptions: {
+    excludes: ["blog/"],
+    includes: [], // leave empty for all pages
+    limit: 1000,
+  },
+  pageOptions: {
+    onlyMainContent: true,
+  },
+};
+
+const crawlResult = await app.crawlUrl(crawlUrl, params);
+```
+
+### Scraping a URL
+
+To scrape a single URL with error handling, use the `scrapeUrl` method. It takes the URL as a parameter and returns the scraped data as an object.
+
+```js
+const url = "https://example.com";
+const scrapedData = await app.scrapeUrl(url);
+```
+
+### Crawling a Website
+
+To crawl a website with error handling, use the `crawlUrl` method. It takes the starting URL and optional parameters as arguments. The `params` argument allows you to specify additional options for the crawl job, such as the maximum number of pages to crawl, allowed domains, and the output format.
+
+```js
+const crawlUrl = "https://example.com";
+
+const params = {
+  crawlerOptions: {
+    excludes: ["blog/"],
+    includes: [], // leave empty for all pages
+    limit: 1000,
+  },
+  pageOptions: {
+    onlyMainContent: true,
+  },
+};
+
+const waitUntilDone = true;
+const pollInterval = 5;
+
+const crawlResult = await app.crawlUrl(
+  crawlUrl,
+  params,
+  waitUntilDone,
+  pollInterval
+);
+```
+
+### Checking Crawl Status
+
+To check the status of a crawl job with error handling, use the `checkCrawlStatus` method. It takes the job ID as a parameter and returns the current status of the crawl job.
+
+```js
+const status = await app.checkCrawlStatus(jobId);
+```
+
+### Extracting structured data from a URL
+
+With LLM extraction, you can easily extract structured data from any URL. We support zod schema to make it easier for you too. Here is how to use it:
+
+```js
+import FirecrawlApp from "@mendable/firecrawl-js";
+import { z } from "zod";
+
+const app = new FirecrawlApp({
+  apiKey: "fc-YOUR_API_KEY",
+});
+
+// Define schema to extract contents into
+const schema = z.object({
+  top: z
+    .array(
+      z.object({
+        title: z.string(),
+        points: z.number(),
+        by: z.string(),
+        commentsURL: z.string(),
+      })
+    )
+    .length(5)
+    .describe("Top 5 stories on Hacker News"),
+});
+
+const scrapeResult = await app.scrapeUrl("https://firecrawl.dev", {
+  extractorOptions: { extractionSchema: schema },
+});
+
+console.log(scrapeResult.data["llm_extraction"]);
+```
+
+### Search for a query
+
+With the `search` method, you can search for a query in a search engine and get the top results along with the page content for each result. The method takes the query as a parameter and returns the search results.
+
+```js
+const query = "what is mendable?";
+const searchResults = await app.search(query, {
+  pageOptions: {
+    fetchPageContent: true, // Fetch the page content for each search result
+  },
+});
+```
+
+## Error Handling
+
+The SDK handles errors returned by the Firecrawl API and raises appropriate exceptions. If an error occurs during a request, an exception will be raised with a descriptive error message. The examples above demonstrate how to handle these errors using `try/catch` blocks.
+
+## License
+
+The Firecrawl Node SDK is licensed under the MIT License. This means you are free to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the SDK, subject to the following conditions:
+
+- The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+Please note that while this SDK is MIT licensed, it is part of a larger project which may be under different licensing terms. Always refer to the license information in the root directory of the main project for overall licensing details.
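The new README's Error Handling section describes the `try/catch` pattern without a standalone snippet. A self-contained sketch of that pattern, using an invented stub in place of `app.scrapeUrl` (the stub's behavior and error message are assumptions for illustration; the real SDK call goes over the network):

```javascript
// Stub standing in for app.scrapeUrl; invented behavior, illustration only.
async function scrapeUrlStub(url) {
  if (!url.startsWith("https://")) {
    throw new Error(`Unsupported URL scheme: ${url}`); // invented error message
  }
  return { url, markdown: "# Example page" };
}

async function main() {
  try {
    // A non-https URL makes the stub throw, exercising the catch branch.
    const scrapedData = await scrapeUrlStub("ftp://example.com");
    console.log(scrapedData);
  } catch (error) {
    // Same shape as the README's try/catch examples.
    console.error("Error occurred while scraping:", error.message);
  }
}

main();
```

The point of the pattern is that the SDK surfaces API failures as thrown exceptions, so a single `catch` around the `await` covers both network errors and error responses.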
apps/python-sdk/LICENSE (new file, 21 lines)
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2024 Sideguide Technologies Inc.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
apps/python-sdk/README.md

@@ -15,7 +15,6 @@ pip install firecrawl-py
 1. Get an API key from [firecrawl.dev](https://firecrawl.dev)
 2. Set the API key as an environment variable named `FIRECRAWL_API_KEY` or pass it as a parameter to the `FirecrawlApp` class.
 
-
 Here's an example of how to use the SDK:
 
 ```python

@@ -46,6 +45,7 @@ To scrape a single URL, use the `scrape_url` method. It takes the URL as a param
 url = 'https://example.com'
 scraped_data = app.scrape_url(url)
 ```
+
 ### Extracting structured data from a URL
 
 With LLM extraction, you can easily extract structured data from any URL. We support pydantic schemas to make it easier for you too. Here is how to use it:

@@ -126,20 +126,27 @@ To ensure the functionality of the Firecrawl Python SDK, we have included end-to
 To run the tests, execute the following commands:
 
 Install pytest:
 
 ```bash
 pip install pytest
 ```
 
 Run:
 
 ```bash
 pytest firecrawl/__tests__/e2e_withAuth/test.py
 ```
 
 ## Contributing
 
 Contributions to the Firecrawl Python SDK are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request on the GitHub repository.
 
 ## License
-The Firecrawl Python SDK is open-source and released under the [AGPL License](https://www.gnu.org/licenses/agpl-3.0.en.html).
+The Firecrawl Python SDK is licensed under the MIT License. This means you are free to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the SDK, subject to the following conditions:
+
+- The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+Please note that while this SDK is MIT licensed, it is part of a larger project which may be under different licensing terms. Always refer to the license information in the root directory of the main project for overall licensing details.
+