diff --git a/README.md b/README.md index 415f082..3280e01 100644 --- a/README.md +++ b/README.md @@ -29,6 +29,7 @@ The ioBroker AI Assistant Adapter runs a smart assistant in your ioBroker system - **OpenAI**: [openai.com](https://openai.com) - **Perplexity**: [perplexity.ai](https://perplexity.ai) - **OpenRouter**: [openrouter.ai](https://openrouter.ai) +- **DeepSeek**: [deepseek.com](https://deepseek.com) - **Custom/Self-hosted Models** (e.g., LM Studio, LocalAI) --- @@ -224,6 +225,13 @@ Set the log level to `debug` in the ioBroker admin interface for detailed logs. Placeholder for the next version (at the beginning of the line): ### **WORK IN PROGRESS** --> +### 0.1.3 (2025-01-29) +- (@ToGe3688) Added support for DeepSeek as API provider +- (@ToGe3688) Improved display of providers in the model selection of the admin config +- (@ToGe3688) Fixed object hierarchy +- (@ToGe3688) Fixed state roles +- (@ToGe3688) Fixed onStateChange handler + ### 0.1.2 (2025-12-01) - (@ToGe3688) Better error handling for Provider APIs - (@ToGe3688) Anthropic API Versioning diff --git a/admin/i18n/de/translations.json b/admin/i18n/de/translations.json index 6d0ea00..e298a03 100644 --- a/admin/i18n/de/translations.json +++ b/admin/i18n/de/translations.json @@ -1,67 +1,68 @@ { + "A descriptive name for your function": "Ein beschreibender Name für Ihre Funktion", + "API Token": "API-Token", + "API Token for Inference Server": "API-Token für Inference-Server", + "Active": "Aktiv", "Assistant Settings": "Assistenten-Einstellungen", - "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Geben Sie Ihrem persönlichen Assistenten einen Namen und beschreiben Sie seine Persönlichkeit. 
Wählen Sie ein Modell aus, das für Ihren Assistenten verwendet werden soll.", - "Name": "Name", - "Name for the Assistant": "Name für den Assistenten", - "Model": "Modell", - "Which Model should be used": "Welches Modell soll verwendet werden", - "Personality": "Persönlichkeit", + "Assistant can use Object": "Assistent kann Objekt verwenden", + "Assistant can use this function": "Assistent kann diese Funktion verwenden", + "Custom functions for assistant": "Benutzerdefinierte Funktionen für Assistenten", + "Datapoint (Request)": "Datenpunkt (Anfrage)", + "Datapoint (Result)": "Datenpunkt (Ergebnis)", + "Debug / Chain-of-Thought Output": "Debug / Gedankengang-Ausgabe", + "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Definieren Sie benutzerdefinierte Funktionen für den Assistenten. Fügen Sie eine gute Beschreibung für Ihre Funktionen hinzu, damit der Assistent weiß, wann er Ihre Funktion aufrufen soll. Jede Funktion benötigt einen Datenpunkt, der den Prozess startet, und einen weiteren Datenpunkt, der das Ergebnis Ihrer Funktion enthält.", "Describe the personality of your assistant": "Beschreiben Sie die Persönlichkeit Ihres Assistenten", - "Friendly and helpful": "Freundlich und hilfsbereit", - "Language": "Sprache", - "Select the language that should be used by the assistant": "Wählen Sie die Sprache aus, die der Assistent verwenden soll", + "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Beschreiben Sie, was Ihre Funktion macht und wie die Daten für die Anfrage aussehen sollen. Dies ist wichtig, damit der Assistent Ihre Funktion versteht.", + "Description": "Beschreibung", + "Do you really want to import objects from enum.rooms? 
Existing objects will be reset!": "Möchten Sie wirklich Objekte aus enum.rooms importieren? Bestehende Objekte werden zurückgesetzt!", + "ERROR: column 'Model' must contain unique text": "FEHLER: Spalte 'Modell' muss eindeutigen Text enthalten", "English": "Englisch", + "Friendly and helpful": "Freundlich und hilfsbereit", + "Functions": "Funktionen", "German": "Deutsch", - "Debug / Chain-of-Thought Output": "Debug / Gedankengang-Ausgabe", - "When activated the internal thought process of the assistant will be written to the response datapoint": "Wenn aktiviert, wird der interne Gedankenprozess des Assistenten in den Antwort-Datenpunkt geschrieben", - "Model Settings": "Modell-Einstellungen", - "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Wählen Sie aus, wie viele Nachrichten für die Kontextbeibehaltung einbezogen werden sollen. Temperature definiert die Kreativität/Zufälligkeit der Ausgabe von 0-1, wobei 0 die vorhersehbarste Ausgabe ist. Legen Sie fest, wie viele Tokens maximal für Assistenten-Antworten generiert werden sollen.", - "Message History (Chat Mode)": "Nachrichtenverlauf (Chat-Modus)", + "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Geben Sie Ihrem persönlichen Assistenten einen Namen und beschreiben Sie seine Persönlichkeit. 
Wählen Sie ein Modell aus, das für Ihren Assistenten verwendet werden soll.", + "How long to wait between retries": "Wie lange zwischen Wiederholungen gewartet werden soll", + "How many times should we retry if request to model fails": "Wie oft soll bei fehlgeschlagener Modellanfrage wiederholt werden", "If greater 0 previous messages will be included in the request so the tool will stay in context": "Bei Werten größer 0 werden vorherige Nachrichten in die Anfrage einbezogen, damit das Tool im Kontext bleibt", - "Temperature": "Temperature", - "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Einstellung für Kreativität/Konsistenz der Modellantwort. (Bei Unsicherheit auf Standardwert belassen!=", - "Max. Tokens": "Max. Tokens", + "Import objects from enum.rooms": "Objekte aus enum.rooms importieren", + "Language": "Sprache", "Limit the response of the tool to your desired amount of tokens.": "Begrenzen Sie die Antwort des Tools auf die gewünschte Anzahl von Tokens.", - "Request Settings": "Anfrage-Einstellungen", - "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Wählen Sie aus, ob fehlgeschlagene Anfragen an den Assistenten wiederholt werden sollen und wie lange zwischen den Versuchen gewartet werden soll.", + "Link to LM Studio": "Link zu LM Studio", + "Link to LocalAI": "Link zu LocalAI", "Max. Retries": "Max. Wiederholungen", - "How many times should we retry if request to model fails": "Wie oft soll bei fehlgeschlagener Modellanfrage wiederholt werden", - "Retry Delay": "Wiederholungsverzögerung", - "How long to wait between retries": "Wie lange zwischen Wiederholungen gewartet werden soll", + "Max. Tokens": "Max. 
Tokens", + "Message History (Chat Mode)": "Nachrichtenverlauf (Chat-Modus)", + "Model": "Modell", + "Model Settings": "Modell-Einstellungen", + "Model is active": "Modell ist aktiv", + "Models": "Modelle", + "Name": "Name", + "Name for the Assistant": "Name für den Assistenten", + "Name of the Model": "Name des Modells", + "Object": "Objekt", "Object access for assistant": "Objektzugriff für Assistenten", - "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Bitte fügen Sie die Objekte hinzu, die Sie mit dem Assistenten verwenden möchten. Der Assistent wird diese Objekte lesen und steuern können. Sie können den Button verwenden, um alle Zustände aus Ihrer konfigurierten Raumsortierung zu importieren. Stellen Sie sicher, dass Sie nur benötigte Zustände einbeziehen, um Tokens zu sparen.", - "Import objects from enum.rooms": "Objekte aus enum.rooms importieren", - "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Möchten Sie wirklich Objekte aus enum.rooms importieren? Bestehende Objekte werden zurückgesetzt!", "Objects": "Objekte", - "Active": "Aktiv", - "Assistant can use Object": "Assistent kann Objekt verwenden", - "Sort": "Sortierung", - "Room or sorting for Object": "Raum oder Sortierung für Objekt", - "Object": "Objekt", - "Custom functions for assistant": "Benutzerdefinierte Funktionen für Assistenten", - "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Definieren Sie benutzerdefinierte Funktionen für den Assistenten. 
Fügen Sie eine gute Beschreibung für Ihre Funktionen hinzu, damit der Assistent weiß, wann er Ihre Funktion aufrufen soll. Jede Funktion benötigt einen Datenpunkt, der den Prozess startet, und einen weiteren Datenpunkt, der das Ergebnis Ihrer Funktion enthält.", - "Functions": "Funktionen", - "Assistant can use this function": "Assistent kann diese Funktion verwenden", - "A descriptive name for your function": "Ein beschreibender Name für Ihre Funktion", - "Description": "Beschreibung", - "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Beschreiben Sie, was Ihre Funktion macht und wie die Daten für die Anfrage aussehen sollen. Dies ist wichtig, damit der Assistent Ihre Funktion versteht.", - "Datapoint (Request)": "Datenpunkt (Anfrage)", - "The datapoint that starts the request for the function": "Der Datenpunkt, der die Anfrage für die Funktion startet", - "Datapoint (Result)": "Datenpunkt (Ergebnis)", - "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Der Datenpunkt, der das Ergebnis Ihres Funktionsaufrufs enthält (Muss innerhalb von 60 Sekunden erfüllt werden!)", + "Personality": "Persönlichkeit", + "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Bitte fügen Sie die Objekte hinzu, die Sie mit dem Assistenten verwenden möchten. Der Assistent wird diese Objekte lesen und steuern können. Sie können den Button verwenden, um alle Zustände aus Ihrer konfigurierten Raumsortierung zu importieren. Stellen Sie sicher, dass Sie nur benötigte Zustände einbeziehen, um Tokens zu sparen.", "Please enter your Anthropic API Token to start using models like Opus, Haiku and Sonnet. 
If there are new models released you can simply add them in the table to start using them with ai assistants.": "Bitte geben Sie Ihren Anthropic API-Token ein, um Modelle wie Opus, Haiku und Sonnet zu nutzen. Wenn neue Modelle veröffentlicht werden, können Sie diese einfach in der Tabelle hinzufügen, um sie mit KI-Assistenten zu verwenden.", - "Settings": "Einstellungen", - "API Token": "API-Token", - "ERROR: column 'Model' must contain unique text": "FEHLER: Spalte 'Modell' muss eindeutigen Text enthalten", - "Models": "Modelle", - "Model is active": "Modell ist aktiv", - "Name of the Model": "Name des Modells", + "Please enter your Deepseek API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Bitte geben Sie Ihr Deepseek API-Token ein, um die Modelle zu verwenden. Wenn neue Modelle veröffentlicht werden, können Sie sie einfach in die Tabelle hinzufügen, um sie mit KI-Assistenten zu verwenden.", "Please enter your OpenAI API Token to start using models like Gpt4, Gpt4-o1, Gpt3-5. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Bitte geben Sie Ihren OpenAI API-Token ein, um Modelle wie Gpt4, Gpt4-o1, Gpt3-5 zu nutzen. Wenn neue Modelle veröffentlicht werden, können Sie diese einfach in der Tabelle hinzufügen, um sie mit KI-Assistenten zu verwenden.", - "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Bitte geben Sie Ihren Perplexity API-Token ein, um die Modelle zu nutzen. Wenn neue Modelle veröffentlicht werden, können Sie diese einfach in der Tabelle hinzufügen, um sie mit KI-Assistenten zu verwenden.", "Please enter your Openrouter API Token to start using the models. 
If there are new models released you can simply add them in the table to start using them with ai assistants.": "Bitte geben Sie Ihren Openrouter API-Token ein, um die Modelle zu nutzen. Wenn neue Modelle veröffentlicht werden, können Sie diese einfach in der Tabelle hinzufügen, um sie mit KI-Assistenten zu verwenden.", - "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Sie können Ihren benutzerdefinierten oder selbst gehosteten Inference-Server verwenden, um Open-Source-Modelle auszuführen. Der Server muss den REST-API-Standards folgen, die von vielen Anbietern verwendet werden, siehe Beispiele unten. Bitte stellen Sie sicher, dass Sie Ihre verwendeten Modelle mit Namen in die untenstehende Tabelle einfügen.", - "Link to LM Studio": "Link zu LM Studio", - "Link to LocalAI": "Link zu LocalAI", + "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Bitte geben Sie Ihren Perplexity API-Token ein, um die Modelle zu nutzen. Wenn neue Modelle veröffentlicht werden, können Sie diese einfach in der Tabelle hinzufügen, um sie mit KI-Assistenten zu verwenden.", + "Request Settings": "Anfrage-Einstellungen", + "Retry Delay": "Wiederholungsverzögerung", + "Room or sorting for Object": "Raum oder Sortierung für Objekt", + "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Wählen Sie aus, wie viele Nachrichten für die Kontextbeibehaltung einbezogen werden sollen. 
Temperature definiert die Kreativität/Zufälligkeit der Ausgabe von 0-1, wobei 0 die vorhersehbarste Ausgabe ist. Legen Sie fest, wie viele Tokens maximal für Assistenten-Antworten generiert werden sollen.", + "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Wählen Sie aus, ob fehlgeschlagene Anfragen an den Assistenten wiederholt werden sollen und wie lange zwischen den Versuchen gewartet werden soll.", + "Select the language that should be used by the assistant": "Wählen Sie die Sprache aus, die der Assistent verwenden soll", + "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Einstellung für Kreativität/Konsistenz der Modellantwort. (Bei Unsicherheit auf Standardwert belassen!)", + "Settings": "Einstellungen", + "Sort": "Sortierung", + "Temperature": "Temperature", + "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Der Datenpunkt, der das Ergebnis Ihres Funktionsaufrufs enthält (Muss innerhalb von 60 Sekunden erfüllt werden!)", + "The datapoint that starts the request for the function": "Der Datenpunkt, der die Anfrage für die Funktion startet", "URL for Inference Server": "URL für Inference-Server", - "API Token for Inference Server": "API-Token für Inference-Server" + "When activated the internal thought process of the assistant will be written to the response datapoint": "Wenn aktiviert, wird der interne Gedankenprozess des Assistenten in den Antwort-Datenpunkt geschrieben", + "Which Model should be used": "Welches Modell soll verwendet werden", + "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. 
Please make sure to add your used models by name to the table below.": "Sie können Ihren benutzerdefinierten oder selbst gehosteten Inference-Server verwenden, um Open-Source-Modelle auszuführen. Der Server muss den REST-API-Standards folgen, die von vielen Anbietern verwendet werden, siehe Beispiele unten. Bitte stellen Sie sicher, dass Sie Ihre verwendeten Modelle mit Namen in die untenstehende Tabelle einfügen." } diff --git a/admin/i18n/en/translations.json b/admin/i18n/en/translations.json index 39649c2..600eea6 100644 --- a/admin/i18n/en/translations.json +++ b/admin/i18n/en/translations.json @@ -1,67 +1,68 @@ { + "A descriptive name for your function": "A descriptive name for your function", + "API Token": "API Token", + "API Token for Inference Server": "API Token for Inference Server", + "Active": "Active", "Assistant Settings": "Assistant Settings", - "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.", - "Name": "Name", - "Name for the Assistant": "Name for the Assistant", - "Model": "Model", - "Which Model should be used": "Which Model should be used", - "Personality": "Personality", + "Assistant can use Object": "Assistant can use Object", + "Assistant can use this function": "Assistant can use this function", + "Custom functions for assistant": "Custom functions for assistant", + "Datapoint (Request)": "Datapoint (Request)", + "Datapoint (Result)": "Datapoint (Result)", + "Debug / Chain-of-Thought Output": "Debug / Chain-of-Thought Output", + "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Define custom functions for the assistant. 
Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.", "Describe the personality of your assistant": "Describe the personality of your assistant", - "Friendly and helpful": "Friendly and helpful", - "Language": "Language", - "Select the language that should be used by the assistant": "Select the language that should be used by the assistant", + "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.", + "Description": "Description", + "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Do you really want to import objects from enum.rooms? Existing objects will be reset!", + "ERROR: column 'Model' must contain unique text": "ERROR: column 'Model' must contain unique text", "English": "English", + "Friendly and helpful": "Friendly and helpful", + "Functions": "Functions", "German": "German", - "Debug / Chain-of-Thought Output": "Debug / Chain-of-Thought Output", - "When activated the internal thought process of the assistant will be written to the response datapoint": "When activated the internal thought process of the assistant will be written to the response datapoint", - "Model Settings": "Model Settings", - "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. 
Set how many tokens should be generated max for assistant responses.", - "Message History (Chat Mode)": "Message History (Chat Mode)", + "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.", + "How long to wait between retries": "How long to wait between retries", + "How many times should we retry if request to model fails": "How many times should we retry if request to model fails", "If greater 0 previous messages will be included in the request so the tool will stay in context": "If greater 0 previous messages will be included in the request so the tool will stay in context", - "Temperature": "Temperature", - "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=", - "Max. Tokens": "Max. Tokens", + "Import objects from enum.rooms": "Import objects from enum.rooms", + "Language": "Language", "Limit the response of the tool to your desired amount of tokens.": "Limit the response of the tool to your desired amount of tokens.", - "Request Settings": "Request Settings", - "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Select if failed requests to the assistant should be retried and how long to wait between tries.", + "Link to LM Studio": "Link to LM Studio", + "Link to LocalAI": "Link to LocalAI", "Max. Retries": "Max. Retries", - "How many times should we retry if request to model fails": "How many times should we retry if request to model fails", - "Retry Delay": "Retry Delay", - "How long to wait between retries": "How long to wait between retries", + "Max. Tokens": "Max. 
Tokens", + "Message History (Chat Mode)": "Message History (Chat Mode)", + "Model": "Model", + "Model Settings": "Model Settings", + "Model is active": "Model is active", + "Models": "Models", + "Name": "Name", + "Name for the Assistant": "Name for the Assistant", + "Name of the Model": "Name of the Model", + "Object": "Object", "Object access for assistant": "Object access for assistant", - "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.", - "Import objects from enum.rooms": "Import objects from enum.rooms", - "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Do you really want to import objects from enum.rooms? Existing objects will be reset!", "Objects": "Objects", - "Active": "Active", - "Assistant can use Object": "Assistant can use Object", - "Sort": "Sort", - "Room or sorting for Object": "Room or sorting for Object", - "Object": "Object", - "Custom functions for assistant": "Custom functions for assistant", - "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. 
Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.", - "Functions": "Functions", - "Assistant can use this function": "Assistant can use this function", - "A descriptive name for your function": "A descriptive name for your function", - "Description": "Description", - "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.", - "Datapoint (Request)": "Datapoint (Request)", - "The datapoint that starts the request for the function": "The datapoint that starts the request for the function", - "Datapoint (Result)": "Datapoint (Result)", - "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)", + "Personality": "Personality", + "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.", "Please enter your Anthropic API Token to start using models like Opus, Haiku and Sonnet. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Please enter your Anthropic API Token to start using models like Opus, Haiku and Sonnet. 
If there are new models released you can simply add them in the table to start using them with ai assistants.", - "Settings": "Settings", - "API Token": "API Token", - "ERROR: column 'Model' must contain unique text": "ERROR: column 'Model' must contain unique text", - "Models": "Models", - "Model is active": "Model is active", - "Name of the Model": "Name of the Model", + "Please enter your Deepseek API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Please enter your Deepseek API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.", "Please enter your OpenAI API Token to start using models like Gpt4, Gpt4-o1, Gpt3-5. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Please enter your OpenAI API Token to start using models like Gpt4, Gpt4-o1, Gpt3-5. If there are new models released you can simply add them in the table to start using them with ai assistants.", - "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.", "Please enter your Openrouter API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Please enter your Openrouter API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.", - "You can use your custom or self hosted inference server to run open source models. 
The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.", - "Link to LM Studio": "Link to LM Studio", - "Link to LocalAI": "Link to LocalAI", + "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.", + "Request Settings": "Request Settings", + "Retry Delay": "Retry Delay", + "Room or sorting for Object": "Room or sorting for Object", + "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.", + "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Select if failed requests to the assistant should be retried and how long to wait between tries.", + "Select the language that should be used by the assistant": "Select the language that should be used by the assistant", + "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Setting for creativity/consistency of the models response. 
(Leave at default if you are not sure!)", + "Settings": "Settings", + "Sort": "Sort", + "Temperature": "Temperature", + "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)", + "The datapoint that starts the request for the function": "The datapoint that starts the request for the function", "URL for Inference Server": "URL for Inference Server", - "API Token for Inference Server": "API Token for Inference Server" + "When activated the internal thought process of the assistant will be written to the response datapoint": "When activated the internal thought process of the assistant will be written to the response datapoint", + "Which Model should be used": "Which Model should be used", + "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below." } diff --git a/admin/i18n/es/translations.json b/admin/i18n/es/translations.json index 3632c57..b63635a 100644 --- a/admin/i18n/es/translations.json +++ b/admin/i18n/es/translations.json @@ -1,67 +1,68 @@ { + "A descriptive name for your function": "Un nombre descriptivo para su función.", + "API Token": "Token de API", + "API Token for Inference Server": "Token de API para el servidor de inferencia", + "Active": "Activo", "Assistant Settings": "Configuración del asistente", - "Give your personal assistant a name and describe its personality. 
Choose a model that should be used for your assistant.": "Dale un nombre a tu asistente personal y describe su personalidad. Elija un modelo que deba usarse para su asistente.", - "Name": "Nombre", - "Name for the Assistant": "Nombre del asistente", - "Model": "Modelo", - "Which Model should be used": "¿Qué modelo se debe utilizar?", - "Personality": "Personalidad", + "Assistant can use Object": "El asistente puede usar el objeto", + "Assistant can use this function": "El asistente puede usar esta función", + "Custom functions for assistant": "Funciones personalizadas para asistente", + "Datapoint (Request)": "Punto de datos (Solicitud)", + "Datapoint (Result)": "Punto de datos (Resultado)", + "Debug / Chain-of-Thought Output": "Salida de depuración/cadena de pensamiento", + "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Defina funciones personalizadas para el asistente. Asegúrese de agregar una buena descripción de sus funciones para que el asistente sepa cuándo llamar a su función. Cada función necesita un punto de datos que inicie el proceso y otro punto de datos que contenga el resultado de su función.", "Describe the personality of your assistant": "Describe la personalidad de tu asistente.", - "Friendly and helpful": "Amable y servicial", - "Language": "Idioma", - "Select the language that should be used by the assistant": "Seleccione el idioma que debe utilizar el asistente", + "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Describe qué hace tu función y cómo deben verse los datos de la solicitud. 
Esto es importante para que el asistente comprenda su función.", + "Description": "Descripción", + "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "¿Realmente quieres importar objetos desde enum.rooms? ¡Los objetos existentes se restablecerán!", + "ERROR: column 'Model' must contain unique text": "ERROR: la columna 'Modelo' debe contener texto único", "English": "Inglés", + "Friendly and helpful": "Amable y servicial", + "Functions": "Funciones", "German": "Alemán", - "Debug / Chain-of-Thought Output": "Salida de depuración/cadena de pensamiento", - "When activated the internal thought process of the assistant will be written to the response datapoint": "Cuando se activa, el proceso de pensamiento interno del asistente se escribirá en el punto de datos de respuesta.", - "Model Settings": "Configuración del modelo", - "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Seleccione cuántos mensajes se deben incluir para la retención de contexto. La temperatura define la creatividad/aleatoriedad del resultado de 0 a 1, donde 0 es el resultado más predecible. Establezca cuántos tokens se deben generar como máximo para las respuestas del asistente.", - "Message History (Chat Mode)": "Historial de mensajes (modo chat)", + "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Dale un nombre a tu asistente personal y describe su personalidad. 
Elija un modelo que deba usarse para su asistente.", + "How long to wait between retries": "¿Cuánto tiempo esperar entre reintentos?", + "How many times should we retry if request to model fails": "¿Cuántas veces debemos volver a intentarlo si falla la solicitud de modelo?", "If greater 0 previous messages will be included in the request so the tool will stay in context": "Si es mayor, se incluirán 0 mensajes anteriores en la solicitud para que la herramienta permanezca en contexto", - "Temperature": "Temperatura", - "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Ajuste para la creatividad/consistencia de la respuesta del modelo. (¡Déjelo como predeterminado si no está seguro!=", - "Max. Tokens": "Máx. Fichas", + "Import objects from enum.rooms": "Importar objetos desde enum.rooms", + "Language": "Idioma", "Limit the response of the tool to your desired amount of tokens.": "Limite la respuesta de la herramienta a la cantidad deseada de tokens.", - "Request Settings": "Solicitar configuración", - "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Seleccione si se deben volver a intentar las solicitudes fallidas al asistente y cuánto tiempo esperar entre intentos.", + "Link to LM Studio": "Enlace a LM Estudio", + "Link to LocalAI": "Enlace a LocalAI", "Max. Retries": "Máx. Reintentos", - "How many times should we retry if request to model fails": "¿Cuántas veces debemos volver a intentarlo si falla la solicitud de modelo?", - "Retry Delay": "Retardo de reintento", - "How long to wait between retries": "¿Cuánto tiempo esperar entre reintentos?", + "Max. Tokens": "Máx. 
Fichas", + "Message History (Chat Mode)": "Historial de mensajes (modo chat)", + "Model": "Modelo", + "Model Settings": "Configuración del modelo", + "Model is active": "El modelo está activo.", + "Models": "Modelos", + "Name": "Nombre", + "Name for the Assistant": "Nombre del asistente", + "Name of the Model": "Nombre del modelo", + "Object": "Objeto", "Object access for assistant": "Acceso a objetos para asistente", - "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Agregue los objetos que desea usar con el asistente. El asistente podrá leer y controlar estos objetos. Puede utilizar el botón para importar todos los estados desde la clasificación de habitaciones configurada. Asegúrese de incluir solo los estados necesarios para guardar tokens.", - "Import objects from enum.rooms": "Importar objetos desde enum.rooms", - "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "¿Realmente quieres importar objetos desde enum.rooms? ¡Los objetos existentes se restablecerán!", "Objects": "Objetos", - "Active": "Activo", - "Assistant can use Object": "El asistente puede usar el objeto", - "Sort": "Clasificar", - "Room or sorting for Object": "Habitación o clasificación por objeto", - "Object": "Objeto", - "Custom functions for assistant": "Funciones personalizadas para asistente", - "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Definir funciones personalizadas para el asistente. Asegúrese de agregar una buena descripción de sus funciones para que el asistente sepa cuándo llamar a su función. 
Cada función necesita un punto de datos que inicie el proceso y otro punto de datos que contenga el resultado de su función.", - "Functions": "Funciones", - "Assistant can use this function": "El asistente puede usar esta función", - "A descriptive name for your function": "Un nombre descriptivo para su función.", - "Description": "Descripción", - "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Describe qué hace tu función y cómo deben verse los datos de la solicitud. Esto es importante para que el asistente comprenda su función.", - "Datapoint (Request)": "Punto de datos (Solicitud)", - "The datapoint that starts the request for the function": "El punto de datos que inicia la solicitud de la función.", - "Datapoint (Result)": "Punto de datos (resultado)", - "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "El punto de datos que contiene el resultado de su llamada a función (¡debe cumplirse en 60 segundos!)", + "Personality": "Personalidad", + "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Agregue los objetos que desea usar con el asistente. El asistente podrá leer y controlar estos objetos. Puede utilizar el botón para importar todos los estados desde la clasificación de habitaciones configurada. Asegúrese de incluir solo los estados necesarios para guardar tokens.", "Please enter your Anthropic API Token to start using models like Opus, Haiku and Sonnet. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Ingrese su token API Anthropic para comenzar a usar modelos como Opus, Haiku y Sonnet. 
Si se lanzan nuevos modelos, simplemente puede agregarlos en la tabla para comenzar a usarlos con asistentes de inteligencia artificial.", - "Settings": "Ajustes", - "API Token": "Ficha API", - "ERROR: column 'Model' must contain unique text": "ERROR: la columna 'Modelo' debe contener texto único", - "Models": "Modelos", - "Model is active": "El modelo está activo.", - "Name of the Model": "Nombre del modelo", + "Please enter your Deepseek API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Ingrese su token API Deepseek para comenzar a usar los modelos. Si hay nuevos modelos lanzados, simplemente puede agregarlos a la mesa para comenzar a usarlos con asistentes de IA.", "Please enter your OpenAI API Token to start using models like Gpt4, Gpt4-o1, Gpt3-5. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Ingrese su token API de OpenAI para comenzar a usar modelos como Gpt4, Gpt4-o1, Gpt3-5. Si se lanzan nuevos modelos, simplemente puede agregarlos en la tabla para comenzar a usarlos con asistentes de inteligencia artificial.", - "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Ingrese su token API de Perplexity para comenzar a usar los modelos. Si se lanzan nuevos modelos, simplemente puede agregarlos en la tabla para comenzar a usarlos con asistentes de inteligencia artificial.", "Please enter your Openrouter API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Ingrese su token API de Openrouter para comenzar a usar los modelos. 
Si se lanzan nuevos modelos, simplemente puede agregarlos en la tabla para comenzar a usarlos con asistentes de inteligencia artificial.", - "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Puede utilizar su servidor de inferencia personalizado o autohospedado para ejecutar modelos de código abierto. El servidor debe seguir el resto de los estándares API utilizados por muchos proveedores; consulte los ejemplos a continuación. Asegúrese de agregar sus modelos usados ​​por nombre a la siguiente tabla.", - "Link to LM Studio": "Enlace a LM Estudio", - "Link to LocalAI": "Enlace a LocalAI", + "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Ingrese su token API de Perplexity para comenzar a usar los modelos. Si se lanzan nuevos modelos, simplemente puede agregarlos en la tabla para comenzar a usarlos con asistentes de inteligencia artificial.", + "Request Settings": "Solicitar configuración", + "Retry Delay": "Retardo de reintento", + "Room or sorting for Object": "Habitación o clasificación por objeto", + "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Seleccione cuántos mensajes se deben incluir para la retención de contexto. La temperatura define la creatividad/aleatoriedad del resultado de 0 a 1, donde 0 es el resultado más predecible. 
Establezca cuántos tokens se deben generar como máximo para las respuestas del asistente.", + "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Seleccione si se deben volver a intentar las solicitudes fallidas al asistente y cuánto tiempo esperar entre intentos.", + "Select the language that should be used by the assistant": "Seleccione el idioma que debe utilizar el asistente", + "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Ajuste para la creatividad/consistencia de la respuesta del modelo. (¡Déjelo como predeterminado si no está seguro!=", + "Settings": "Ajustes", + "Sort": "Clasificar", + "Temperature": "Temperatura", + "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "El punto de datos que contiene el resultado de su llamada a función (¡debe cumplirse en 60 segundos!)", + "The datapoint that starts the request for the function": "El punto de datos que inicia la solicitud de la función.", "URL for Inference Server": "URL para el servidor de inferencia", - "API Token for Inference Server": "Token API para servidor de inferencia" + "When activated the internal thought process of the assistant will be written to the response datapoint": "Cuando se activa, el proceso de pensamiento interno del asistente se escribirá en el punto de datos de respuesta.", + "Which Model should be used": "¿Qué modelo se debe utilizar?", + "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Puede utilizar su servidor de inferencia personalizado o autohospedado para ejecutar modelos de código abierto. El servidor debe seguir el resto de los estándares API utilizados por muchos proveedores; consulte los ejemplos a continuación. 
Asegúrese de agregar sus modelos usados ​​por nombre a la siguiente tabla." } diff --git a/admin/i18n/fr/translations.json b/admin/i18n/fr/translations.json index ceb7318..5196662 100644 --- a/admin/i18n/fr/translations.json +++ b/admin/i18n/fr/translations.json @@ -1,67 +1,68 @@ { + "A descriptive name for your function": "Un nom descriptif pour votre fonction", + "API Token": "Jeton API", + "API Token for Inference Server": "Jeton API pour le serveur d'inférence", + "Active": "Actif", "Assistant Settings": "Paramètres de l'assistant", - "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Donnez un nom à votre assistant personnel et décrivez sa personnalité. Choisissez un modèle qui doit être utilisé pour votre assistant.", - "Name": "Nom", - "Name for the Assistant": "Nom de l'assistant", - "Model": "Modèle", - "Which Model should be used": "Quel modèle doit être utilisé", - "Personality": "Personnalité", + "Assistant can use Object": "L'assistant peut utiliser l'objet", + "Assistant can use this function": "L'assistant peut utiliser cette fonction", + "Custom functions for assistant": "Fonctions personnalisées pour l'assistant", + "Datapoint (Request)": "Point de données (demande)", + "Datapoint (Result)": "Point de données (résultat)", + "Debug / Chain-of-Thought Output": "Sortie de débogage/chaîne de pensée", + "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Définissez des fonctions personnalisées pour l'assistant. Assurez-vous d'ajouter une bonne description de vos fonctions afin que l'assistant sache quand appeler votre fonction. 
Chaque fonction a besoin d'un point de données qui démarre le processus et d'un autre point de données qui contient le résultat de votre fonction.", "Describe the personality of your assistant": "Décrivez la personnalité de votre assistant", - "Friendly and helpful": "Sympathique et serviable", - "Language": "Langue", - "Select the language that should be used by the assistant": "Sélectionnez la langue qui doit être utilisée par l'assistant", + "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Décrivez ce que fait votre fonction et à quoi devraient ressembler les données de la requête. Ceci est important pour que l’assistant comprenne votre fonction.", + "Description": "Description", + "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Voulez-vous vraiment importer des objets depuis enum.rooms ? Les objets existants seront réinitialisés !", + "ERROR: column 'Model' must contain unique text": "ERREUR : la colonne \"Modèle\" doit contenir un texte unique", "English": "Anglais", + "Friendly and helpful": "Sympathique et serviable", + "Functions": "Fonctions", "German": "Allemand", - "Debug / Chain-of-Thought Output": "Sortie de débogage/chaîne de pensée", - "When activated the internal thought process of the assistant will be written to the response datapoint": "Lorsqu'il est activé, le processus de réflexion interne de l'assistant sera écrit dans le point de données de réponse.", - "Model Settings": "Paramètres du modèle", - "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Sélectionnez le nombre de messages à inclure pour la conservation du contexte. 
La température définit la créativité/le caractère aléatoire de la sortie de 0 à 1, où 0 est la sortie la plus prévisible. Définissez le nombre maximum de jetons qui doivent être générés pour les réponses de l'assistant.", - "Message History (Chat Mode)": "Historique des messages (mode chat)", + "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Donnez un nom à votre assistant personnel et décrivez sa personnalité. Choisissez un modèle qui doit être utilisé pour votre assistant.", + "How long to wait between retries": "Combien de temps attendre entre les tentatives", + "How many times should we retry if request to model fails": "Combien de fois devons-nous réessayer si la demande de modélisation échoue", "If greater 0 previous messages will be included in the request so the tool will stay in context": "Si supérieur à 0, les messages précédents seront inclus dans la requête afin que l'outil reste en contexte", - "Temperature": "Température", - "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Paramètre de créativité/cohérence de la réponse du modèle. (Laissez la valeur par défaut si vous n'êtes pas sûr !=", - "Max. Tokens": "Max. Jetons", + "Import objects from enum.rooms": "Importer des objets depuis enum.rooms", + "Language": "Langue", "Limit the response of the tool to your desired amount of tokens.": "Limitez la réponse de l'outil à la quantité de jetons souhaitée.", - "Request Settings": "Paramètres de la demande", - "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Sélectionnez si les demandes échouées adressées à l'assistant doivent être réessayées et combien de temps attendre entre les tentatives.", + "Link to LM Studio": "Lien vers LM Studio", + "Link to LocalAI": "Lien vers LocalAI", "Max. Retries": "Max. 
Nouvelles tentatives", - "How many times should we retry if request to model fails": "Combien de fois devons-nous réessayer si la demande de modélisation échoue", - "Retry Delay": "Délai de nouvelle tentative", - "How long to wait between retries": "Combien de temps attendre entre les tentatives", + "Max. Tokens": "Max. Jetons", + "Message History (Chat Mode)": "Historique des messages (mode chat)", + "Model": "Modèle", + "Model Settings": "Paramètres du modèle", + "Model is active": "Le modèle est actif", + "Models": "Modèles", + "Name": "Nom", + "Name for the Assistant": "Nom de l'assistant", + "Name of the Model": "Nom du modèle", + "Object": "Objet", "Object access for assistant": "Accès aux objets pour l'assistant", - "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Veuillez ajouter les objets que vous souhaitez utiliser avec l'assistant. L'assistant sera capable de lire et de contrôler ces objets. Vous pouvez utiliser le bouton pour importer tous les états de votre tri de pièce configuré. Assurez-vous d'inclure uniquement les états nécessaires pour enregistrer les jetons.", - "Import objects from enum.rooms": "Importer des objets depuis enum.rooms", - "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Voulez-vous vraiment importer des objets depuis enum.rooms ? Les objets existants seront réinitialisés !", "Objects": "Objets", - "Active": "Actif", - "Assistant can use Object": "L'assistant peut utiliser l'objet", - "Sort": "Trier", - "Room or sorting for Object": "Pièce ou tri pour Objet", - "Object": "Objet", - "Custom functions for assistant": "Fonctions personnalisées pour l'assistant", - "Define custom functions for the assistant. 
Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Définissez des fonctions personnalisées pour l'assistant. Assurez-vous d'ajouter une bonne description de vos fonctions afin que l'assistant sache quand appeler votre fonction. Chaque fonction a besoin d'un point de données qui démarre le processus et d'un autre point de données qui contient le résultat de votre fonction.", - "Functions": "Fonctions", - "Assistant can use this function": "L'assistant peut utiliser cette fonction", - "A descriptive name for your function": "Un nom descriptif pour votre fonction", - "Description": "Description", - "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Décrivez ce que fait votre fonction et à quoi devraient ressembler les données de la requête. Ceci est important pour que l’assistant comprenne votre fonction.", - "Datapoint (Request)": "Point de données (demande)", - "The datapoint that starts the request for the function": "Le point de données qui démarre la demande pour la fonction", - "Datapoint (Result)": "Point de données (résultat)", - "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Le point de données qui contient le résultat de votre appel de fonction (doit être rempli en 60 secondes !)", + "Personality": "Personnalité", + "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Veuillez ajouter les objets que vous souhaitez utiliser avec l'assistant. L'assistant sera capable de lire et de contrôler ces objets. 
Vous pouvez utiliser le bouton pour importer tous les états de votre tri de pièce configuré. Assurez-vous d'inclure uniquement les états nécessaires pour enregistrer les jetons.", "Please enter your Anthropic API Token to start using models like Opus, Haiku and Sonnet. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Veuillez saisir votre jeton API Anthropic pour commencer à utiliser des modèles comme Opus, Haiku et Sonnet. Si de nouveaux modèles sont publiés, vous pouvez simplement les ajouter dans le tableau pour commencer à les utiliser avec les assistants IA.", - "Settings": "Paramètres", - "API Token": "Jeton API", - "ERROR: column 'Model' must contain unique text": "ERREUR : la colonne \"Modèle\" doit contenir un texte unique", - "Models": "Modèles", - "Model is active": "Le modèle est actif", - "Name of the Model": "Nom du modèle", + "Please enter your Deepseek API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Veuillez saisir votre jeton API Deepseek pour commencer à utiliser les modèles. S'il y a de nouveaux modèles publiés, vous pouvez simplement les ajouter dans le tableau pour commencer à les utiliser avec des assistants en IA.", "Please enter your OpenAI API Token to start using models like Gpt4, Gpt4-o1, Gpt3-5. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Veuillez saisir votre jeton API OpenAI pour commencer à utiliser des modèles tels que Gpt4, Gpt4-o1, Gpt3-5. Si de nouveaux modèles sont publiés, vous pouvez simplement les ajouter dans le tableau pour commencer à les utiliser avec les assistants IA.", - "Please enter your Perplexity API Token to start using the models. 
If there are new models released you can simply add them in the table to start using them with ai assistants.": "Veuillez saisir votre jeton API Perplexity pour commencer à utiliser les modèles. Si de nouveaux modèles sont publiés, vous pouvez simplement les ajouter dans le tableau pour commencer à les utiliser avec les assistants IA.", "Please enter your Openrouter API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Veuillez saisir votre jeton API Openrouter pour commencer à utiliser les modèles. Si de nouveaux modèles sont publiés, vous pouvez simplement les ajouter dans le tableau pour commencer à les utiliser avec les assistants IA.", - "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Vous pouvez utiliser votre serveur d'inférence personnalisé ou auto-hébergé pour exécuter des modèles open source. Le serveur doit suivre les autres normes API utilisées par de nombreux fournisseurs, voir les exemples ci-dessous. Veuillez vous assurer d'ajouter vos modèles utilisés par leur nom au tableau ci-dessous.", - "Link to LM Studio": "Lien vers LM Studio", - "Link to LocalAI": "Lien vers LocalAI", + "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Veuillez saisir votre jeton API Perplexity pour commencer à utiliser les modèles. 
Si de nouveaux modèles sont publiés, vous pouvez simplement les ajouter dans le tableau pour commencer à les utiliser avec les assistants IA.", + "Request Settings": "Paramètres de la demande", + "Retry Delay": "Délai de nouvelle tentative", + "Room or sorting for Object": "Pièce ou tri pour Objet", + "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Sélectionnez le nombre de messages à inclure pour la conservation du contexte. La température définit la créativité/le caractère aléatoire de la sortie de 0 à 1, où 0 est la sortie la plus prévisible. Définissez le nombre maximum de jetons qui doivent être générés pour les réponses de l'assistant.", + "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Sélectionnez si les demandes échouées adressées à l'assistant doivent être réessayées et combien de temps attendre entre les tentatives.", + "Select the language that should be used by the assistant": "Sélectionnez la langue qui doit être utilisée par l'assistant", + "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Paramètre de créativité/cohérence de la réponse du modèle. 
(Laissez la valeur par défaut si vous n'êtes pas sûr !=", + "Settings": "Paramètres", + "Sort": "Trier", + "Temperature": "Température", + "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Le point de données qui contient le résultat de votre appel de fonction (doit être rempli en 60 secondes !)", + "The datapoint that starts the request for the function": "Le point de données qui démarre la demande pour la fonction", "URL for Inference Server": "URL du serveur d'inférence", - "API Token for Inference Server": "Jeton API pour le serveur d'inférence" + "When activated the internal thought process of the assistant will be written to the response datapoint": "Lorsqu'il est activé, le processus de réflexion interne de l'assistant sera écrit dans le point de données de réponse.", + "Which Model should be used": "Quel modèle doit être utilisé", + "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Vous pouvez utiliser votre serveur d'inférence personnalisé ou auto-hébergé pour exécuter des modèles open source. Le serveur doit suivre les autres normes API utilisées par de nombreux fournisseurs, voir les exemples ci-dessous. Veuillez vous assurer d'ajouter vos modèles utilisés par leur nom au tableau ci-dessous." 
} diff --git a/admin/i18n/it/translations.json b/admin/i18n/it/translations.json index 85023cb..a2353c7 100644 --- a/admin/i18n/it/translations.json +++ b/admin/i18n/it/translations.json @@ -1,67 +1,68 @@ { + "A descriptive name for your function": "Un nome descrittivo per la tua funzione", + "API Token": "Gettone API", + "API Token for Inference Server": "Token API per il server di inferenza", + "Active": "Attivo", "Assistant Settings": "Impostazioni dell'assistente", - "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Dai un nome al tuo assistente personale e descrivi la sua personalità. Scegli un modello che dovrebbe essere utilizzato per il tuo assistente.", - "Name": "Nome", - "Name for the Assistant": "Nome per l'Assistente", - "Model": "Modello", - "Which Model should be used": "Quale modello dovrebbe essere utilizzato", - "Personality": "Personalità", + "Assistant can use Object": "L'assistente può utilizzare l'oggetto", + "Assistant can use this function": "L'assistente può utilizzare questa funzione", + "Custom functions for assistant": "Funzioni personalizzate per l'assistente", + "Datapoint (Request)": "Punto dati (richiesto)", + "Datapoint (Result)": "Punto dati (risultato)", + "Debug / Chain-of-Thought Output": "Output di debug/catena di pensiero", + "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Definire funzioni personalizzate per l'assistente. Assicurati di aggiungere una buona descrizione per le tue funzioni in modo che l'assistente sappia quando chiamare la tua funzione. 
Ogni funzione necessita di un punto dati che avvii il processo e di un altro punto dati che contenga il risultato della funzione.", "Describe the personality of your assistant": "Descrivi la personalità del tuo assistente", - "Friendly and helpful": "Cordiale e disponibile", - "Language": "Lingua", - "Select the language that should be used by the assistant": "Seleziona la lingua che dovrà essere utilizzata dall'assistente", + "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Descrivi cosa fa la tua funzione e come dovrebbero apparire i dati per la richiesta. Questo è importante affinché l'assistente comprenda la tua funzione.", + "Description": "Descrizione", + "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Vuoi davvero importare oggetti da enum.rooms? Gli oggetti esistenti verranno ripristinati!", + "ERROR: column 'Model' must contain unique text": "ERRORE: la colonna \"Modello\" deve contenere testo univoco", "English": "Inglese", + "Friendly and helpful": "Cordiale e disponibile", + "Functions": "Funzioni", "German": "tedesco", - "Debug / Chain-of-Thought Output": "Output di debug/catena di pensiero", - "When activated the internal thought process of the assistant will be written to the response datapoint": "Una volta attivato, il processo di pensiero interno dell'assistente verrà scritto nel datapoint della risposta", - "Model Settings": "Impostazioni del modello", - "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Seleziona il numero di messaggi da includere per la conservazione del contesto. La temperatura definisce la creatività/casualità dell'output da 0-1 dove 0 è l'output più prevedibile. 
Imposta il numero massimo di token da generare per le risposte dell'assistente.", - "Message History (Chat Mode)": "Cronologia dei messaggi (modalità chat)", + "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Dai un nome al tuo assistente personale e descrivi la sua personalità. Scegli un modello che dovrebbe essere utilizzato per il tuo assistente.", + "How long to wait between retries": "Quanto tempo attendere tra un nuovo tentativo e l'altro", + "How many times should we retry if request to model fails": "Quante volte dovremmo riprovare se la richiesta al modello fallisce", "If greater 0 previous messages will be included in the request so the tool will stay in context": "Se maggiore di 0 i messaggi precedenti verranno inclusi nella richiesta, lo strumento rimarrà nel contesto", - "Temperature": "Temperatura", - "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Impostazione per creatività/coerenza della risposta dei modelli. (Lascia il valore predefinito se non sei sicuro!=", - "Max. Tokens": "Massimo. Gettoni", + "Import objects from enum.rooms": "Importa oggetti da enum.rooms", + "Language": "Lingua", "Limit the response of the tool to your desired amount of tokens.": "Limita la risposta dello strumento alla quantità di token desiderata.", - "Request Settings": "Richiedi impostazioni", - "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Seleziona se le richieste non riuscite all'assistente devono essere ritentate e quanto tempo attendere tra un tentativo e l'altro.", + "Link to LM Studio": "Collegamento a LM Studio", + "Link to LocalAI": "Collegamento a LocalAI", "Max. Retries": "Massimo. 
Nuovi tentativi", - "How many times should we retry if request to model fails": "Quante volte dovremmo riprovare se la richiesta al modello fallisce", - "Retry Delay": "Ritardo riprova", - "How long to wait between retries": "Quanto tempo attendere tra un nuovo tentativo e l'altro", + "Max. Tokens": "Max. token", + "Message History (Chat Mode)": "Cronologia dei messaggi (modalità chat)", + "Model": "Modello", + "Model Settings": "Impostazioni del modello", + "Model is active": "Il modello è attivo", + "Models": "Modelli", + "Name": "Nome", + "Name for the Assistant": "Nome per l'Assistente", + "Name of the Model": "Nome del modello", + "Object": "Oggetto", "Object access for assistant": "Accesso agli oggetti per l'assistente", - "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Aggiungi gli oggetti che desideri utilizzare con l'assistente. L'assistente sarà in grado di leggere e controllare questi oggetti. Puoi utilizzare il pulsante per importare tutti gli stati dall'ordinamento delle stanze configurato. Assicurati di includere solo gli stati necessari per salvare i token.", - "Import objects from enum.rooms": "Importa oggetti da enum.rooms", - "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Vuoi davvero importare oggetti da enum.rooms? Gli oggetti esistenti verranno ripristinati!", "Objects": "Oggetti", - "Active": "Attivo", - "Assistant can use Object": "L'assistente può utilizzare l'oggetto", - "Sort": "Ordinare", - "Room or sorting for Object": "Stanza o ordinamento per oggetto", - "Object": "Oggetto", - "Custom functions for assistant": "Funzioni personalizzate per l'assistente", - "Define custom functions for the assistant. 
Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Definire funzioni personalizzate per l'assistente. Assicurati di aggiungere una buona descrizione per le tue funzioni in modo che l'assistente sappia quando chiamare la tua funzione. Ogni funzione necessita di un punto dati che avvii il processo e di un altro punto dati che contenga il risultato della funzione.", - "Functions": "Funzioni", - "Assistant can use this function": "L'assistente può utilizzare questa funzione", - "A descriptive name for your function": "Un nome descrittivo per la tua funzione", - "Description": "Descrizione", - "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Descrivi cosa fa la tua funzione e come dovrebbero apparire i dati per la richiesta. Questo è importante affinché l'assistente comprenda la tua funzione.", - "Datapoint (Request)": "Punto dati (richiesto)", - "The datapoint that starts the request for the function": "Il punto dati che avvia la richiesta per la funzione", - "Datapoint (Result)": "Punto dati (risultato)", - "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Il punto dati che contiene il risultato della chiamata di funzione (deve essere soddisfatto in 60 secondi!)", + "Personality": "Personalità", + "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Aggiungi gli oggetti che desideri utilizzare con l'assistente. L'assistente sarà in grado di leggere e controllare questi oggetti. 
Puoi utilizzare il pulsante per importare tutti gli stati dall'ordinamento delle stanze configurato. Assicurati di includere solo gli stati necessari per risparmiare token.", "Please enter your Anthropic API Token to start using models like Opus, Haiku and Sonnet. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Inserisci il tuo token API Anthropic per iniziare a utilizzare modelli come Opus, Haiku e Sonnet. Se vengono rilasciati nuovi modelli puoi semplicemente aggiungerli nella tabella per iniziare ad usarli con gli assistenti ai.", - "Settings": "Impostazioni", - "API Token": "Gettone API", - "ERROR: column 'Model' must contain unique text": "ERRORE: la colonna \"Modello\" deve contenere testo univoco", - "Models": "Modelli", - "Model is active": "Il modello è attivo", - "Name of the Model": "Nome del modello", + "Please enter your Deepseek API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Inserisci il token API DeepSeek per iniziare a utilizzare i modelli. Se ci sono nuovi modelli rilasciati, puoi semplicemente aggiungerli nella tabella per iniziare a usarli con gli assistenti di intelligenza artificiale.", "Please enter your OpenAI API Token to start using models like Gpt4, Gpt4-o1, Gpt3-5. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Inserisci il tuo token API OpenAI per iniziare a utilizzare modelli come Gpt4, Gpt4-o1, Gpt3-5. Se vengono rilasciati nuovi modelli puoi semplicemente aggiungerli nella tabella per iniziare ad usarli con gli assistenti ai.", - "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Inserisci il tuo token API Perplexity per iniziare a utilizzare i modelli. 
Se vengono rilasciati nuovi modelli puoi semplicemente aggiungerli nella tabella per iniziare ad usarli con gli assistenti ai.", "Please enter your Openrouter API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Inserisci il token API Openrouter per iniziare a utilizzare i modelli. Se vengono rilasciati nuovi modelli puoi semplicemente aggiungerli nella tabella per iniziare ad usarli con gli assistenti ai.", - "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Puoi utilizzare il tuo server di inferenza personalizzato o ospitato autonomamente per eseguire modelli open source. Il server deve seguire gli altri standard API utilizzati da molti provider, vedere gli esempi di seguito. Assicurati di aggiungere i modelli usati per nome alla tabella seguente.", - "Link to LM Studio": "Collegamento a LM Studio", - "Link to LocalAI": "Collegamento a LocalAI", + "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Inserisci il tuo token API Perplexity per iniziare a utilizzare i modelli. Se vengono rilasciati nuovi modelli puoi semplicemente aggiungerli nella tabella per iniziare ad usarli con gli assistenti ai.", + "Request Settings": "Impostazioni delle richieste", + "Retry Delay": "Ritardo riprova", + "Room or sorting for Object": "Stanza o ordinamento per oggetto", + "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. 
Set how many tokens should be generated max for assistant responses.": "Seleziona il numero di messaggi da includere per la conservazione del contesto. La temperatura definisce la creatività/casualità dell'output da 0-1 dove 0 è l'output più prevedibile. Imposta il numero massimo di token da generare per le risposte dell'assistente.", + "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Seleziona se le richieste non riuscite all'assistente devono essere ritentate e quanto tempo attendere tra un tentativo e l'altro.", + "Select the language that should be used by the assistant": "Seleziona la lingua che dovrà essere utilizzata dall'assistente", + "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Impostazione per creatività/coerenza della risposta dei modelli. (Lascia il valore predefinito se non sei sicuro!)", + "Settings": "Impostazioni", + "Sort": "Ordinamento", + "Temperature": "Temperatura", + "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Il punto dati che contiene il risultato della chiamata di funzione (deve essere soddisfatto in 60 secondi!)", + "The datapoint that starts the request for the function": "Il punto dati che avvia la richiesta per la funzione", "URL for Inference Server": "URL per il server di inferenza", - "API Token for Inference Server": "Token API per il server di inferenza" + "When activated the internal thought process of the assistant will be written to the response datapoint": "Una volta attivato, il processo di pensiero interno dell'assistente verrà scritto nel datapoint della risposta", + "Which Model should be used": "Quale modello dovrebbe essere utilizzato", + "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. 
Please make sure to add your used models by name to the table below.": "Puoi utilizzare il tuo server di inferenza personalizzato o ospitato autonomamente per eseguire modelli open source. Il server deve seguire gli standard REST API utilizzati da molti provider, vedere gli esempi di seguito. Assicurati di aggiungere i modelli usati per nome alla tabella seguente." } diff --git a/admin/i18n/nl/translations.json b/admin/i18n/nl/translations.json index 1c8fd79..e72aa0c 100644 --- a/admin/i18n/nl/translations.json +++ b/admin/i18n/nl/translations.json @@ -1,67 +1,68 @@ { + "A descriptive name for your function": "Een beschrijvende naam voor uw functie", + "API Token": "API-token", + "API Token for Inference Server": "API-token voor inferentieserver", + "Active": "Actief", "Assistant Settings": "Assistent-instellingen", - "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Geef uw persoonlijke assistent een naam en beschrijf zijn persoonlijkheid. Kies een model dat voor uw assistent moet worden gebruikt.", - "Name": "Naam", - "Name for the Assistant": "Naam voor de assistent", - "Model": "Model", - "Which Model should be used": "Welk model moet worden gebruikt", - "Personality": "Persoonlijkheid", + "Assistant can use Object": "De Assistent kan Object gebruiken", + "Assistant can use this function": "De Assistent kan deze functie gebruiken", + "Custom functions for assistant": "Aangepaste functies voor assistent", + "Datapoint (Request)": "Datapunt (aanvraag)", + "Datapoint (Result)": "Datapunt (resultaat)", + "Debug / Chain-of-Thought Output": "Debug / Chain-of-Thought-uitvoer", + "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. 
Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Definieer aangepaste functies voor de assistent. Zorg ervoor dat u een goede omschrijving van uw functies toevoegt, zodat de assistent weet wanneer hij uw functie moet oproepen. Elke functie heeft een datapunt nodig dat het proces start, en een ander datapunt dat het resultaat van uw functie bevat.", "Describe the personality of your assistant": "Beschrijf de persoonlijkheid van uw assistent", - "Friendly and helpful": "Vriendelijk en behulpzaam", - "Language": "Taal", - "Select the language that should be used by the assistant": "Selecteer de taal die door de assistent moet worden gebruikt", + "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Beschrijf wat uw functie doet en hoe de gegevens voor het verzoek eruit moeten zien. Dit is belangrijk voor de assistent om uw functie te begrijpen.", + "Description": "Beschrijving", + "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Wilt u echt objecten uit enum.rooms importeren? Bestaande objecten worden gereset!", + "ERROR: column 'Model' must contain unique text": "FOUT: kolom 'Model' moet unieke tekst bevatten", "English": "Engels", + "Friendly and helpful": "Vriendelijk en behulpzaam", + "Functions": "Functies", "German": "Duits", - "Debug / Chain-of-Thought Output": "Debug / Chain-of-Thought-uitvoer", - "When activated the internal thought process of the assistant will be written to the response datapoint": "Wanneer geactiveerd, wordt het interne denkproces van de assistent naar het responsdatapunt geschreven", - "Model Settings": "Modelinstellingen", - "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. 
Set how many tokens should be generated max for assistant responses.": "Selecteer hoeveel berichten moeten worden opgenomen voor contextbehoud. Temperatuur definieert de creativiteit/willekeurigheid van de output van 0-1, waarbij 0 de meest voorspelbare output is. Stel in hoeveel tokens er maximaal mogen worden gegenereerd voor assistent-reacties.", - "Message History (Chat Mode)": "Berichtgeschiedenis (chatmodus)", + "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Geef uw persoonlijke assistent een naam en beschrijf zijn persoonlijkheid. Kies een model dat voor uw assistent moet worden gebruikt.", + "How long to wait between retries": "Hoe lang er moet worden gewacht tussen nieuwe pogingen", + "How many times should we retry if request to model fails": "Hoe vaak moeten we het opnieuw proberen als het verzoek om te modelleren mislukt", "If greater 0 previous messages will be included in the request so the tool will stay in context": "Als er meer 0 zijn, worden eerdere berichten in het verzoek opgenomen, zodat de tool in context blijft", - "Temperature": "Temperatuur", - "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Instelling voor creativiteit/consistentie van de reactie van het model. (Laat de standaardwaarde staan ​​als u het niet zeker weet!=", - "Max. Tokens": "Max. 
Tokens", + "Import objects from enum.rooms": "Importeer objecten uit enum.rooms", + "Language": "Taal", "Limit the response of the tool to your desired amount of tokens.": "Beperk de reactie van de tool tot het door u gewenste aantal tokens.", - "Request Settings": "Instellingen opvragen", - "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Selecteer of mislukte verzoeken aan de assistent opnieuw moeten worden geprobeerd en hoe lang er tussen pogingen moet worden gewacht.", + "Link to LM Studio": "Link naar LM Studio", + "Link to LocalAI": "Link naar LocalAI", "Max. Retries": "Max. Nieuwe pogingen", - "How many times should we retry if request to model fails": "Hoe vaak moeten we het opnieuw proberen als het verzoek om te modelleren mislukt", - "Retry Delay": "Vertraging opnieuw proberen", - "How long to wait between retries": "Hoe lang er moet worden gewacht tussen nieuwe pogingen", + "Max. Tokens": "Max. Tokens", + "Message History (Chat Mode)": "Berichtgeschiedenis (chatmodus)", + "Model": "Model", + "Model Settings": "Modelinstellingen", + "Model is active": "Model is actief", + "Models": "Modellen", + "Name": "Naam", + "Name for the Assistant": "Naam voor de assistent", + "Name of the Model": "Naam van het model", + "Object": "Voorwerp", "Object access for assistant": "Objecttoegang voor assistent", - "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Voeg de objecten toe die u met de assistent wilt gebruiken. De assistent kan deze objecten lezen en besturen. Met de knop kunt u alle statussen uit uw geconfigureerde kamersortering importeren. 
Zorg ervoor dat u alleen de benodigde statussen opneemt om tokens op te slaan.", - "Import objects from enum.rooms": "Importeer objecten uit enum.rooms", - "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Wilt u echt objecten uit enum.rooms importeren? Bestaande objecten worden gereset!", "Objects": "Objecten", - "Active": "Actief", - "Assistant can use Object": "De Assistent kan Object gebruiken", - "Sort": "Soort", - "Room or sorting for Object": "Ruimte of sortering voor object", - "Object": "Voorwerp", - "Custom functions for assistant": "Aangepaste functies voor assistent", - "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Definieer aangepaste functies voor de assistent. Zorg ervoor dat u een goede omschrijving van uw functies toevoegt, zodat de assistent weet wanneer hij uw functie moet oproepen. Elke functie heeft een datapunt nodig dat het proces start, en een ander datapunt dat het resultaat van uw functie bevat.", - "Functions": "Functies", - "Assistant can use this function": "De Assistent kan deze functie gebruiken", - "A descriptive name for your function": "Een beschrijvende naam voor uw functie", - "Description": "Beschrijving", - "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Beschrijf wat uw functie doet en hoe de gegevens voor het verzoek eruit moeten zien. 
Dit is belangrijk voor de assistent om uw functie te begrijpen.", - "Datapoint (Request)": "Datapunt (aanvraag)", - "The datapoint that starts the request for the function": "Het gegevenspunt waarmee de aanvraag voor de functie wordt gestart", - "Datapoint (Result)": "Datapunt (resultaat)", - "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Het datapunt dat het resultaat van uw functieaanroep bevat (moet binnen 60 seconden worden vervuld!)", + "Personality": "Persoonlijkheid", + "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Voeg de objecten toe die u met de assistent wilt gebruiken. De assistent kan deze objecten lezen en besturen. Met de knop kunt u alle statussen uit uw geconfigureerde kamersortering importeren. Zorg ervoor dat u alleen de benodigde statussen opneemt om tokens te besparen.", "Please enter your Anthropic API Token to start using models like Opus, Haiku and Sonnet. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Voer uw Anthropic API Token in om modellen als Opus, Haiku en Sonnet te gaan gebruiken. Als er nieuwe modellen zijn uitgebracht, kunt u deze eenvoudig aan de tabel toevoegen om ze met AI-assistenten te gaan gebruiken.", - "Settings": "Instellingen", - "API Token": "API-token", - "ERROR: column 'Model' must contain unique text": "FOUT: kolom 'Model' moet unieke tekst bevatten", - "Models": "Modellen", - "Model is active": "Model is actief", - "Name of the Model": "Naam van het model", + "Please enter your Deepseek API Token to start using the models. 
If there are new models released you can simply add them in the table to start using them with ai assistants.": "Voer uw Deepseek API-token in om de modellen te gaan gebruiken. Als er nieuwe modellen zijn uitgebracht, kunt u deze eenvoudig aan de tabel toevoegen om ze met AI-assistenten te gaan gebruiken.", "Please enter your OpenAI API Token to start using models like Gpt4, Gpt4-o1, Gpt3-5. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Voer uw OpenAI API-token in om modellen zoals Gpt4, Gpt4-o1, Gpt3-5 te gaan gebruiken. Als er nieuwe modellen zijn uitgebracht, kunt u deze eenvoudig aan de tabel toevoegen om ze met AI-assistenten te gaan gebruiken.", - "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Voer uw Perplexity API Token in om de modellen te gaan gebruiken. Als er nieuwe modellen zijn uitgebracht, kunt u deze eenvoudig aan de tabel toevoegen om ze met AI-assistenten te gaan gebruiken.", "Please enter your Openrouter API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Voer uw Openrouter API Token in om de modellen te gaan gebruiken. Als er nieuwe modellen zijn uitgebracht, kunt u deze eenvoudig aan de tabel toevoegen om ze met AI-assistenten te gaan gebruiken.", - "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "U kunt uw aangepaste of zelfgehoste inferentieserver gebruiken om open source-modellen uit te voeren. De server moet de overige API-standaarden volgen die door veel providers worden gebruikt, zie onderstaande voorbeelden. 
Zorg ervoor dat u uw gebruikte modellen op naam toevoegt aan de onderstaande tabel.", - "Link to LM Studio": "Link naar LM Studio", - "Link to LocalAI": "Link naar LocalAI", + "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Voer uw Perplexity API Token in om de modellen te gaan gebruiken. Als er nieuwe modellen zijn uitgebracht, kunt u deze eenvoudig aan de tabel toevoegen om ze met AI-assistenten te gaan gebruiken.", + "Request Settings": "Aanvraaginstellingen", + "Retry Delay": "Vertraging opnieuw proberen", + "Room or sorting for Object": "Ruimte of sortering voor object", + "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Selecteer hoeveel berichten moeten worden opgenomen voor contextbehoud. Temperatuur definieert de creativiteit/willekeurigheid van de output van 0-1, waarbij 0 de meest voorspelbare output is. Stel in hoeveel tokens er maximaal mogen worden gegenereerd voor assistent-reacties.", + "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Selecteer of mislukte verzoeken aan de assistent opnieuw moeten worden geprobeerd en hoe lang er tussen pogingen moet worden gewacht.", + "Select the language that should be used by the assistant": "Selecteer de taal die door de assistent moet worden gebruikt", + "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Instelling voor creativiteit/consistentie van de reactie van het model. 
(Laat de standaardwaarde staan als u het niet zeker weet!)", + "Settings": "Instellingen", + "Sort": "Sorteren", + "Temperature": "Temperatuur", + "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Het datapunt dat het resultaat van uw functieaanroep bevat (moet binnen 60 seconden worden vervuld!)", + "The datapoint that starts the request for the function": "Het gegevenspunt waarmee de aanvraag voor de functie wordt gestart", "URL for Inference Server": "URL voor inferentieserver", - "API Token for Inference Server": "API-token voor inferentieserver" + "When activated the internal thought process of the assistant will be written to the response datapoint": "Wanneer geactiveerd, wordt het interne denkproces van de assistent naar het responsdatapunt geschreven", + "Which Model should be used": "Welk model moet worden gebruikt", + "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "U kunt uw aangepaste of zelfgehoste inferentieserver gebruiken om open source-modellen uit te voeren. De server moet de REST API-standaarden volgen die door veel providers worden gebruikt, zie onderstaande voorbeelden. Zorg ervoor dat u uw gebruikte modellen op naam toevoegt aan de onderstaande tabel." } diff --git a/admin/i18n/pl/translations.json b/admin/i18n/pl/translations.json index 980c77c..7d03237 100644 --- a/admin/i18n/pl/translations.json +++ b/admin/i18n/pl/translations.json @@ -1,67 +1,68 @@ { + "A descriptive name for your function": "Opisowa nazwa funkcji", + "API Token": "Token API", + "API Token for Inference Server": "Token API dla serwera wnioskowania", + "Active": "Aktywny", "Assistant Settings": "Ustawienia Asystenta", - "Give your personal assistant a name and describe its personality. 
Choose a model that should be used for your assistant.": "Nadaj swojemu osobistemu asystentowi imię i opisz jego osobowość. Wybierz model, który ma być zastosowany dla Twojego asystenta.", - "Name": "Nazwa", - "Name for the Assistant": "Imię dla Asystenta", - "Model": "Model", - "Which Model should be used": "Który model należy zastosować", - "Personality": "Osobowość", + "Assistant can use Object": "Asystent może używać obiektu", + "Assistant can use this function": "Asystent może korzystać z tej funkcji", + "Custom functions for assistant": "Niestandardowe funkcje asystenta", + "Datapoint (Request)": "Punkt danych (żądanie)", + "Datapoint (Result)": "Punkt danych (wynik)", + "Debug / Chain-of-Thought Output": "Debugowanie/wyjście łańcucha myślowego", + "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Zdefiniuj niestandardowe funkcje asystenta. Pamiętaj, aby dodać dobry opis swoich funkcji, aby asystent wiedział, kiedy wywołać Twoją funkcję. Każda funkcja potrzebuje punktu danych rozpoczynającego proces i innego punktu danych zawierającego wynik funkcji.", "Describe the personality of your assistant": "Opisz osobowość swojego asystenta", - "Friendly and helpful": "Przyjazny i pomocny", - "Language": "Język", - "Select the language that should be used by the assistant": "Wybierz język, jakim ma się posługiwać asystent", + "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Opisz, co robi Twoja funkcja i jak powinny wyglądać dane dla żądania. Jest to ważne, aby asystent rozumiał Twoją funkcję.", + "Description": "Opis", + "Do you really want to import objects from enum.rooms? 
Existing objects will be reset!": "Czy na pewno chcesz importować obiekty z enum.rooms? Istniejące obiekty zostaną zresetowane!", + "ERROR: column 'Model' must contain unique text": "BŁĄD: kolumna „Model” musi zawierać unikalny tekst", "English": "angielski", + "Friendly and helpful": "Przyjazny i pomocny", + "Functions": "Funkcje", "German": "niemiecki", - "Debug / Chain-of-Thought Output": "Debugowanie/wyjście łańcucha myślowego", - "When activated the internal thought process of the assistant will be written to the response datapoint": "Po aktywacji wewnętrzny proces myślowy asystenta zostanie zapisany w punkcie danych odpowiedzi", - "Model Settings": "Ustawienia modelu", - "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Wybierz, ile wiadomości ma zostać uwzględnionych w celu zachowania kontekstu. Temperatura określa kreatywność/losowość wyjścia w zakresie od 0 do 1, gdzie 0 jest najbardziej przewidywalnym wyjściem. Ustaw maksymalną liczbę tokenów generowanych dla odpowiedzi asystenta.", - "Message History (Chat Mode)": "Historia wiadomości (tryb czatu)", + "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Nadaj swojemu osobistemu asystentowi imię i opisz jego osobowość. 
Wybierz model, który ma być zastosowany dla Twojego asystenta.", + "How long to wait between retries": "Jak długo należy czekać między ponownymi próbami", + "How many times should we retry if request to model fails": "Ile razy powinniśmy ponawiać próbę, jeśli żądanie modelu nie powiedzie się", "If greater 0 previous messages will be included in the request so the tool will stay in context": "Jeśli więcej niż 0, poprzednie wiadomości zostaną uwzględnione w żądaniu, więc narzędzie pozostanie w kontekście", - "Temperature": "Temperatura", - "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Ustawienie kreatywności/spójności reakcji modeli. (Pozostaw ustawienie domyślne, jeśli nie jesteś pewien!=", - "Max. Tokens": "Maks. Żetony", + "Import objects from enum.rooms": "Importuj obiekty z enum.rooms", + "Language": "Język", "Limit the response of the tool to your desired amount of tokens.": "Ogranicz reakcję narzędzia do żądanej liczby tokenów.", - "Request Settings": "Ustawienia żądania", - "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Wybierz, czy należy ponawiać nieudane żądania kierowane do asystenta i jak długo należy czekać między próbami.", + "Link to LM Studio": "Link do LM Studio", + "Link to LocalAI": "Link do LocalAI", "Max. Retries": "Maks. Ponowne próby", - "How many times should we retry if request to model fails": "Ile razy powinniśmy ponawiać próbę, jeśli żądanie modelu nie powiedzie się", - "Retry Delay": "Opóźnienie ponownej próby", - "How long to wait between retries": "Jak długo należy czekać między ponownymi próbami", + "Max. Tokens": "Maks. 
Żetony", + "Message History (Chat Mode)": "Historia wiadomości (tryb czatu)", + "Model": "Model", + "Model Settings": "Ustawienia modelu", + "Model is active": "Model jest aktywny", + "Models": "Modele", + "Name": "Nazwa", + "Name for the Assistant": "Imię dla Asystenta", + "Name of the Model": "Nazwa modelu", + "Object": "Obiekt", "Object access for assistant": "Dostęp do obiektu dla asystenta", - "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Dodaj obiekty, których chcesz używać z asystentem. Asystent będzie mógł czytać i sterować tymi obiektami. Za pomocą przycisku możesz zaimportować wszystkie stany ze skonfigurowanego sortowania pomieszczeń. Pamiętaj, aby uwzględnić tylko stany potrzebne do zapisania tokenów.", - "Import objects from enum.rooms": "Importuj obiekty z enum.rooms", - "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Czy na pewno chcesz importować obiekty z enum.rooms? Istniejące obiekty zostaną zresetowane!", "Objects": "Obiekty", - "Active": "Aktywny", - "Assistant can use Object": "Asystent może używać obiektu", - "Sort": "Sortować", - "Room or sorting for Object": "Pokój lub sortowanie dla obiektu", - "Object": "Obiekt", - "Custom functions for assistant": "Niestandardowe funkcje asystenta", - "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Zdefiniuj niestandardowe funkcje asystenta. Pamiętaj, aby dodać dobry opis swoich funkcji, aby asystent wiedział, kiedy wywołać Twoją funkcję. 
Każda funkcja potrzebuje punktu danych rozpoczynającego proces i innego punktu danych zawierającego wynik funkcji.", - "Functions": "Funkcje", - "Assistant can use this function": "Asystent może korzystać z tej funkcji", - "A descriptive name for your function": "Opisowa nazwa funkcji", - "Description": "Opis", - "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Opisz, co robi Twoja funkcja i jak powinny wyglądać dane dla żądania. Jest to ważne, aby asystent rozumiał Twoją funkcję.", - "Datapoint (Request)": "Punkt danych (żądanie)", - "The datapoint that starts the request for the function": "Punkt danych, który rozpoczyna żądanie funkcji", - "Datapoint (Result)": "Punkt danych (wynik)", - "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Punkt danych zawierający wynik wywołania funkcji (musi zostać wypełniony w 60 sekund!)", + "Personality": "Osobowość", + "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Dodaj obiekty, których chcesz używać z asystentem. Asystent będzie mógł czytać i sterować tymi obiektami. Za pomocą przycisku możesz zaimportować wszystkie stany ze skonfigurowanego sortowania pomieszczeń. Pamiętaj, aby uwzględnić tylko stany potrzebne do zapisania tokenów.", "Please enter your Anthropic API Token to start using models like Opus, Haiku and Sonnet. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Wprowadź swój token API Anthropic, aby rozpocząć korzystanie z modeli takich jak Opus, Haiku i Sonnet. 
Jeśli zostaną wydane nowe modele, możesz po prostu dodać je do tabeli, aby rozpocząć korzystanie z nich z asystentami AI.", - "Settings": "Ustawienia", - "API Token": "Token API", - "ERROR: column 'Model' must contain unique text": "BŁĄD: kolumna „Model” musi zawierać unikalny tekst", - "Models": "Modele", - "Model is active": "Model jest aktywny", - "Name of the Model": "Nazwa modelu", + "Please enter your Deepseek API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Wprowadź swój token API Deepseek, aby rozpocząć korzystanie z modeli. Jeśli zostaną wydane nowe modele, możesz po prostu dodać je do tabeli, aby rozpocząć korzystanie z nich z asystentami AI.", "Please enter your OpenAI API Token to start using models like Gpt4, Gpt4-o1, Gpt3-5. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Wprowadź swój token API OpenAI, aby rozpocząć korzystanie z modeli takich jak Gpt4, Gpt4-o1, Gpt3-5. Jeśli zostaną wydane nowe modele, możesz po prostu dodać je do tabeli, aby rozpocząć korzystanie z nich z asystentami AI.", - "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Aby rozpocząć korzystanie z modeli, wprowadź swój token API Perplexity. Jeśli zostaną wydane nowe modele, możesz po prostu dodać je do tabeli, aby rozpocząć korzystanie z nich z asystentami AI.", "Please enter your Openrouter API Token to start using the models. 
Jeśli zostaną wydane nowe modele, możesz po prostu dodać je do tabeli, aby rozpocząć korzystanie z nich z asystentami AI.", - "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Do uruchamiania modeli open source można używać niestandardowego lub hostowanego samodzielnie serwera wnioskowania. Serwer musi być zgodny z pozostałymi standardami API używanymi przez wielu dostawców, zobacz przykłady poniżej. Pamiętaj, aby dodać używane modele według nazwy do poniższej tabeli.", - "Link to LM Studio": "Link do LM Studio", - "Link to LocalAI": "Link do LocalAI", + "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Aby rozpocząć korzystanie z modeli, wprowadź swój token API Perplexity. Jeśli zostaną wydane nowe modele, możesz po prostu dodać je do tabeli, aby rozpocząć korzystanie z nich z asystentami AI.", + "Request Settings": "Ustawienia żądania", + "Retry Delay": "Opóźnienie ponownej próby", + "Room or sorting for Object": "Pokój lub sortowanie dla obiektu", + "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Wybierz, ile wiadomości ma zostać uwzględnionych w celu zachowania kontekstu. Temperatura określa kreatywność/losowość wyjścia w zakresie od 0 do 1, gdzie 0 jest najbardziej przewidywalnym wyjściem. 
Ustaw maksymalną liczbę tokenów generowanych dla odpowiedzi asystenta.", + "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Wybierz, czy należy ponawiać nieudane żądania kierowane do asystenta i jak długo należy czekać między próbami.", + "Select the language that should be used by the assistant": "Wybierz język, jakim ma się posługiwać asystent", + "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Ustawienie kreatywności/spójności reakcji modeli. (Pozostaw ustawienie domyślne, jeśli nie jesteś pewien!=", + "Settings": "Ustawienia", + "Sort": "Sortować", + "Temperature": "Temperatura", + "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Punkt danych zawierający wynik wywołania funkcji (musi zostać wypełniony w 60 sekund!)", + "The datapoint that starts the request for the function": "Punkt danych, który rozpoczyna żądanie funkcji", "URL for Inference Server": "Adres URL serwera wnioskowania", - "API Token for Inference Server": "Token API dla serwera wnioskowania" + "When activated the internal thought process of the assistant will be written to the response datapoint": "Po aktywacji wewnętrzny proces myślowy asystenta zostanie zapisany w punkcie danych odpowiedzi", + "Which Model should be used": "Który model należy zastosować", + "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Do uruchamiania modeli open source można używać niestandardowego lub hostowanego samodzielnie serwera wnioskowania. Serwer musi być zgodny z pozostałymi standardami API używanymi przez wielu dostawców, zobacz przykłady poniżej. Pamiętaj, aby dodać używane modele według nazwy do poniższej tabeli." 
} diff --git a/admin/i18n/pt/translations.json b/admin/i18n/pt/translations.json index e1e5cce..fc1aac2 100644 --- a/admin/i18n/pt/translations.json +++ b/admin/i18n/pt/translations.json @@ -1,67 +1,68 @@ { + "A descriptive name for your function": "Um nome descritivo para sua função", + "API Token": "Token de API", + "API Token for Inference Server": "Token de API para servidor de inferência", + "Active": "Ativo", "Assistant Settings": "Configurações do assistente", - "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Dê um nome ao seu assistente pessoal e descreva sua personalidade. Escolha um modelo que deve ser usado para seu assistente.", - "Name": "Nome", - "Name for the Assistant": "Nome do Assistente", - "Model": "Modelo", - "Which Model should be used": "Qual modelo deve ser usado", - "Personality": "Personalidade", + "Assistant can use Object": "Assistente pode usar objeto", + "Assistant can use this function": "O assistente pode usar esta função", + "Custom functions for assistant": "Funções personalizadas para assistente", + "Datapoint (Request)": "Ponto de dados (solicitação)", + "Datapoint (Result)": "Ponto de dados (resultado)", + "Debug / Chain-of-Thought Output": "Saída de depuração/cadeia de pensamento", + "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Defina funções personalizadas para o assistente. Certifique-se de adicionar uma boa descrição para suas funções para que o assistente saiba quando chamá-la. 
Cada função precisa de um ponto de dados que inicie o processo e outro ponto de dados que contenha o resultado da sua função.", "Describe the personality of your assistant": "Descreva a personalidade do seu assistente", - "Friendly and helpful": "Amigável e prestativo", - "Language": "Linguagem", - "Select the language that should be used by the assistant": "Selecione o idioma que deve ser usado pelo assistente", + "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Descreva o que sua função faz e como devem ser os dados da solicitação. Isso é importante para que o assistente entenda sua função.", + "Description": "Descrição", + "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Você realmente deseja importar objetos de enum.rooms? Os objetos existentes serão redefinidos!", + "ERROR: column 'Model' must contain unique text": "ERRO: a coluna 'Modelo' deve conter texto exclusivo", "English": "Inglês", + "Friendly and helpful": "Amigável e prestativo", + "Functions": "Funções", "German": "Alemão", - "Debug / Chain-of-Thought Output": "Saída de depuração/cadeia de pensamento", - "When activated the internal thought process of the assistant will be written to the response datapoint": "Quando ativado, o processo de pensamento interno do assistente será gravado no ponto de dados de resposta", - "Model Settings": "Configurações do modelo", - "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Selecione quantas mensagens devem ser incluídas para retenção de contexto. A temperatura define a criatividade/aleatoriedade da saída de 0 a 1, onde 0 é a saída mais previsível. 
Defina quantos tokens devem ser gerados no máximo para respostas do assistente.", - "Message History (Chat Mode)": "Histórico de mensagens (modo de bate-papo)", + "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Dê um nome ao seu assistente pessoal e descreva sua personalidade. Escolha um modelo que deve ser usado para seu assistente.", + "How long to wait between retries": "Quanto tempo esperar entre novas tentativas", + "How many times should we retry if request to model fails": "Quantas vezes devemos tentar novamente se a solicitação do modelo falhar", "If greater 0 previous messages will be included in the request so the tool will stay in context": "Se for maior que 0, as mensagens anteriores serão incluídas na solicitação para que a ferramenta permaneça no contexto", - "Temperature": "Temperatura", - "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Configuração para criatividade/consistência da resposta dos modelos. (Deixe como padrão se não tiver certeza!=", - "Max. Tokens": "Máx. Fichas", + "Import objects from enum.rooms": "Importar objetos de enum.rooms", + "Language": "Linguagem", "Limit the response of the tool to your desired amount of tokens.": "Limite a resposta da ferramenta à quantidade desejada de tokens.", - "Request Settings": "Configurações de solicitação", - "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Selecione se as solicitações com falha ao assistente devem ser repetidas e quanto tempo esperar entre as tentativas.", + "Link to LM Studio": "Link para o LM Studio", + "Link to LocalAI": "Link para LocalAI", "Max. Retries": "Máx. 
Novas tentativas", - "How many times should we retry if request to model fails": "Quantas vezes devemos tentar novamente se a solicitação do modelo falhar", - "Retry Delay": "Atraso na nova tentativa", - "How long to wait between retries": "Quanto tempo esperar entre novas tentativas", + "Max. Tokens": "Máx. Fichas", + "Message History (Chat Mode)": "Histórico de mensagens (modo de bate-papo)", + "Model": "Modelo", + "Model Settings": "Configurações do modelo", + "Model is active": "O modelo está ativo", + "Models": "Modelos", + "Name": "Nome", + "Name for the Assistant": "Nome do Assistente", + "Name of the Model": "Nome do modelo", + "Object": "Objeto", "Object access for assistant": "Acesso a objetos para assistente", - "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Adicione os objetos que deseja usar com o assistente. O assistente poderá ler e controlar esses objetos. Você pode usar o botão para importar todos os estados da classificação de salas configurada. Certifique-se de incluir apenas os estados necessários para salvar tokens.", - "Import objects from enum.rooms": "Importar objetos de enum.rooms", - "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Você realmente deseja importar objetos de enum.rooms? Os objetos existentes serão redefinidos!", "Objects": "Objetos", - "Active": "Ativo", - "Assistant can use Object": "Assistente pode usar objeto", - "Sort": "Organizar", - "Room or sorting for Object": "Sala ou classificação por objeto", - "Object": "Objeto", - "Custom functions for assistant": "Funções personalizadas para assistente", - "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. 
Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Defina funções personalizadas para o assistente. Certifique-se de adicionar uma boa descrição para suas funções para que o assistente saiba quando chamá-la. Cada função precisa de um ponto de dados que inicie o processo e outro ponto de dados que contenha o resultado da sua função.", - "Functions": "Funções", - "Assistant can use this function": "O assistente pode usar esta função", - "A descriptive name for your function": "Um nome descritivo para sua função", - "Description": "Descrição", - "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Descreva o que sua função faz e como devem ser os dados da solicitação. Isso é importante para que o assistente entenda sua função.", - "Datapoint (Request)": "Ponto de dados (solicitação)", - "The datapoint that starts the request for the function": "O ponto de dados que inicia a solicitação da função", - "Datapoint (Result)": "Ponto de dados (resultado)", - "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "O ponto de dados que contém o resultado da sua chamada de função (deve ser atendido em 60 segundos!)", + "Personality": "Personalidade", + "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Adicione os objetos que deseja usar com o assistente. O assistente poderá ler e controlar esses objetos. Você pode usar o botão para importar todos os estados da classificação de salas configurada. Certifique-se de incluir apenas os estados necessários para salvar tokens.", "Please enter your Anthropic API Token to start using models like Opus, Haiku and Sonnet. 
If there are new models released you can simply add them in the table to start using them with ai assistants.": "Por favor, insira seu token API Anthropic para começar a usar modelos como Opus, Haiku e Sonnet. Se houver novos modelos lançados, você pode simplesmente adicioná-los na tabela para começar a usá-los com assistentes de IA.", - "Settings": "Configurações", - "API Token": "Token de API", - "ERROR: column 'Model' must contain unique text": "ERRO: a coluna 'Modelo' deve conter texto exclusivo", - "Models": "Modelos", - "Model is active": "O modelo está ativo", - "Name of the Model": "Nome do modelo", + "Please enter your Deepseek API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Por favor, insira seu token de API Deepseek para começar a usar os modelos. Se houver novos modelos lançados, você pode simplesmente adicioná-los na tabela para começar a usá-los com os assistentes de IA.", "Please enter your OpenAI API Token to start using models like Gpt4, Gpt4-o1, Gpt3-5. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Insira seu token de API OpenAI para começar a usar modelos como Gpt4, Gpt4-o1, Gpt3-5. Se houver novos modelos lançados, você pode simplesmente adicioná-los na tabela para começar a usá-los com assistentes de IA.", - "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Insira seu token da API Perplexity para começar a usar os modelos. Se houver novos modelos lançados, você pode simplesmente adicioná-los na tabela para começar a usá-los com assistentes de IA.", "Please enter your Openrouter API Token to start using the models. 
If there are new models released you can simply add them in the table to start using them with ai assistants.": "Insira seu token de API do Openrouter para começar a usar os modelos. Se houver novos modelos lançados, você pode simplesmente adicioná-los na tabela para começar a usá-los com assistentes de IA.", - "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Você pode usar seu servidor de inferência personalizado ou auto-hospedado para executar modelos de código aberto. O servidor precisa seguir os demais padrões de API usados ​​por muitos provedores, veja os exemplos abaixo. Certifique-se de adicionar seus modelos usados ​​por nome na tabela abaixo.", - "Link to LM Studio": "Link para o LM Studio", - "Link to LocalAI": "Link para LocalAI", + "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Insira seu token da API Perplexity para começar a usar os modelos. Se houver novos modelos lançados, você pode simplesmente adicioná-los na tabela para começar a usá-los com assistentes de IA.", + "Request Settings": "Configurações de solicitação", + "Retry Delay": "Atraso na nova tentativa", + "Room or sorting for Object": "Sala ou classificação por objeto", + "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Selecione quantas mensagens devem ser incluídas para retenção de contexto. A temperatura define a criatividade/aleatoriedade da saída de 0 a 1, onde 0 é a saída mais previsível. 
Defina quantos tokens devem ser gerados no máximo para respostas do assistente.", + "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Selecione se as solicitações com falha ao assistente devem ser repetidas e quanto tempo esperar entre as tentativas.", + "Select the language that should be used by the assistant": "Selecione o idioma que deve ser usado pelo assistente", + "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Configuração para criatividade/consistência da resposta dos modelos. (Deixe como padrão se não tiver certeza!=", + "Settings": "Configurações", + "Sort": "Organizar", + "Temperature": "Temperatura", + "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "O ponto de dados que contém o resultado da sua chamada de função (deve ser atendido em 60 segundos!)", + "The datapoint that starts the request for the function": "O ponto de dados que inicia a solicitação da função", "URL for Inference Server": "URL para servidor de inferência", - "API Token for Inference Server": "Token de API para servidor de inferência" + "When activated the internal thought process of the assistant will be written to the response datapoint": "Quando ativado, o processo de pensamento interno do assistente será gravado no ponto de dados de resposta", + "Which Model should be used": "Qual modelo deve ser usado", + "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Você pode usar seu servidor de inferência personalizado ou auto-hospedado para executar modelos de código aberto. O servidor precisa seguir os demais padrões de API usados ​​por muitos provedores, veja os exemplos abaixo. 
Certifique-se de adicionar seus modelos usados ​​por nome na tabela abaixo." } diff --git a/admin/i18n/ru/translations.json b/admin/i18n/ru/translations.json index 58cc912..9069413 100644 --- a/admin/i18n/ru/translations.json +++ b/admin/i18n/ru/translations.json @@ -1,67 +1,68 @@ { + "A descriptive name for your function": "Описательное имя для вашей функции", + "API Token": "API-токен", + "API Token for Inference Server": "Токен API для сервера вывода", + "Active": "Активный", "Assistant Settings": "Настройки Ассистента", - "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Дайте своему личному помощнику имя и опишите его личность. Выберите модель, которую следует использовать для вашего помощника.", - "Name": "Имя", - "Name for the Assistant": "Имя помощника", - "Model": "Модель", - "Which Model should be used": "Какую модель следует использовать", - "Personality": "Личность", + "Assistant can use Object": "Ассистент может использовать объект", + "Assistant can use this function": "Ассистент может использовать эту функцию", + "Custom functions for assistant": "Пользовательские функции для помощника", + "Datapoint (Request)": "Точка данных (запрос)", + "Datapoint (Result)": "Точка данных (результат)", + "Debug / Chain-of-Thought Output": "Отладка/вывод цепочки мыслей", + "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Определите пользовательские функции для помощника. Обязательно добавьте хорошее описание своих функций, чтобы помощник знал, когда вызывать вашу функцию. 
Каждой функции нужна точка данных, которая запускает процесс, и другая точка данных, содержащая результат вашей функции.", "Describe the personality of your assistant": "Опишите личность вашего помощника", - "Friendly and helpful": "Дружелюбный и услужливый", - "Language": "Язык", - "Select the language that should be used by the assistant": "Выберите язык, который будет использовать помощник", + "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Опишите, что делает ваша функция и как должны выглядеть данные для запроса. Это важно, чтобы помощник понял ваши функции.", + "Description": "Описание", + "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Вы действительно хотите импортировать объекты из enum.rooms? Существующие объекты будут сброшены!", + "ERROR: column 'Model' must contain unique text": "ОШИБКА: столбец «Модель» должен содержать уникальный текст.", "English": "Английский", + "Friendly and helpful": "Дружелюбный и услужливый", + "Functions": "Функции", "German": "немецкий", - "Debug / Chain-of-Thought Output": "Отладка/вывод цепочки мыслей", - "When activated the internal thought process of the assistant will be written to the response datapoint": "При активации внутренний мыслительный процесс помощника будет записан в точку данных ответа.", - "Model Settings": "Настройки модели", - "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Выберите, сколько сообщений должно быть включено для сохранения контекста. Температура определяет креативность/случайность результата от 0 до 1, где 0 — наиболее предсказуемый результат. 
Установите максимальное количество токенов, которое должно быть сгенерировано для ответов помощника.", - "Message History (Chat Mode)": "История сообщений (режим чата)", + "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Дайте своему личному помощнику имя и опишите его личность. Выберите модель, которую следует использовать для вашего помощника.", + "How long to wait between retries": "Как долго ждать между повторными попытками", + "How many times should we retry if request to model fails": "Сколько раз мы должны повторить попытку, если запрос на моделирование не удался", "If greater 0 previous messages will be included in the request so the tool will stay in context": "Если в запрос будет включено больше 0 предыдущих сообщений, инструмент останется в контексте.", - "Temperature": "Температура", - "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Настройка креативности/последовательности ответов моделей. (Если вы не уверены, оставьте значение по умолчанию!=", - "Max. Tokens": "Макс. Токены", + "Import objects from enum.rooms": "Импортировать объекты из enum.rooms", + "Language": "Язык", "Limit the response of the tool to your desired amount of tokens.": "Ограничьте реакцию инструмента желаемым количеством токенов.", - "Request Settings": "Запросить настройки", - "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Выберите, следует ли повторять неудачные запросы к помощнику и как долго ждать между попытками.", + "Link to LM Studio": "Ссылка на студию LM", + "Link to LocalAI": "Ссылка на LocalAI", "Max. Retries": "Макс. 
Повторные попытки", - "How many times should we retry if request to model fails": "Сколько раз мы должны повторить попытку, если запрос на моделирование не удался", - "Retry Delay": "Задержка повтора", - "How long to wait between retries": "Как долго ждать между повторными попытками", + "Max. Tokens": "Макс. Токены", + "Message History (Chat Mode)": "История сообщений (режим чата)", + "Model": "Модель", + "Model Settings": "Настройки модели", + "Model is active": "Модель активна", + "Models": "Модели", + "Name": "Имя", + "Name for the Assistant": "Имя помощника", + "Name of the Model": "Название модели", + "Object": "Объект", "Object access for assistant": "Доступ к объекту для помощника", - "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Добавьте объекты, которые хотите использовать, с помощью помощника. Помощник сможет читать и управлять этими объектами. Вы можете использовать кнопку, чтобы импортировать все состояния из настроенной вами сортировки комнат. Обязательно включите только необходимые состояния для сохранения токенов.", - "Import objects from enum.rooms": "Импортировать объекты из enum.rooms", - "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Вы действительно хотите импортировать объекты из enum.rooms? Существующие объекты будут сброшены!", "Objects": "Объекты", - "Active": "Активный", - "Assistant can use Object": "Ассистент может использовать объект", - "Sort": "Сортировать", - "Room or sorting for Object": "Комната или сортировка объекта", - "Object": "Объект", - "Custom functions for assistant": "Пользовательские функции для помощника", - "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. 
Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Определите пользовательские функции для помощника. Обязательно добавьте хорошее описание своих функций, чтобы помощник знал, когда вызывать вашу функцию. Каждой функции нужна точка данных, которая запускает процесс, и другая точка данных, содержащая результат вашей функции.", - "Functions": "Функции", - "Assistant can use this function": "Ассистент может использовать эту функцию", - "A descriptive name for your function": "Описательное имя для вашей функции", - "Description": "Описание", - "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Опишите, что делает ваша функция и как должны выглядеть данные для запроса. Это важно, чтобы помощник понял ваши функции.", - "Datapoint (Request)": "Точка данных (запрос)", - "The datapoint that starts the request for the function": "Точка данных, которая запускает запрос функции", - "Datapoint (Result)": "Точка данных (результат)", - "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Точка данных, содержащая результат вызова функции (должна быть выполнена за 60 секунд!)", + "Personality": "Личность", + "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Добавьте объекты, которые хотите использовать, с помощью помощника. Помощник сможет читать и управлять этими объектами. Вы можете использовать кнопку, чтобы импортировать все состояния из настроенной вами сортировки комнат. Обязательно включите только необходимые состояния для сохранения токенов.", "Please enter your Anthropic API Token to start using models like Opus, Haiku and Sonnet. 
If there are new models released you can simply add them in the table to start using them with ai assistants.": "Введите свой токен Anthropic API, чтобы начать использовать такие модели, как Opus, Haiku и Sonnet. Если выпущены новые модели, вы можете просто добавить их в таблицу, чтобы начать использовать их с помощниками искусственного интеллекта.", - "Settings": "Настройки", - "API Token": "API-токен", - "ERROR: column 'Model' must contain unique text": "ОШИБКА: столбец «Модель» должен содержать уникальный текст.", - "Models": "Модели", - "Model is active": "Модель активна", - "Name of the Model": "Название модели", + "Please enter your Deepseek API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Пожалуйста, введите свой токен DeepSeek API, чтобы начать использовать модели. Если выпущены новые модели, вы можете просто добавить их в таблицу, чтобы начать использовать их с помощниками искусственного интеллекта.", "Please enter your OpenAI API Token to start using models like Gpt4, Gpt4-o1, Gpt3-5. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Введите свой токен API OpenAI, чтобы начать использовать такие модели, как Gpt4, Gpt4-o1, Gpt3-5. Если выпущены новые модели, вы можете просто добавить их в таблицу, чтобы начать использовать их с помощниками искусственного интеллекта.", - "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Введите свой токен Perplexity API, чтобы начать использовать модели. Если выпущены новые модели, вы можете просто добавить их в таблицу, чтобы начать использовать их с помощниками искусственного интеллекта.", "Please enter your Openrouter API Token to start using the models. 
If there are new models released you can simply add them in the table to start using them with ai assistants.": "Введите свой токен API Openrouter, чтобы начать использовать модели. Если выпущены новые модели, вы можете просто добавить их в таблицу, чтобы начать использовать их с помощниками искусственного интеллекта.", - "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Вы можете использовать собственный или собственный сервер вывода для запуска моделей с открытым исходным кодом. Сервер должен соответствовать остальным стандартам API, используемым многими провайдерами, см. примеры ниже. Обязательно добавьте подержанные модели в таблицу ниже.", - "Link to LM Studio": "Ссылка на студию LM", - "Link to LocalAI": "Ссылка на LocalAI", + "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Введите свой токен Perplexity API, чтобы начать использовать модели. Если выпущены новые модели, вы можете просто добавить их в таблицу, чтобы начать использовать их с помощниками искусственного интеллекта.", + "Request Settings": "Настройки запросов", + "Retry Delay": "Задержка повтора", + "Room or sorting for Object": "Комната или сортировка объекта", + "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Выберите, сколько сообщений должно быть включено для сохранения контекста. Температура определяет креативность/случайность результата от 0 до 1, где 0 — наиболее предсказуемый результат. 
Установите максимальное количество токенов, которое должно быть сгенерировано для ответов помощника.", + "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Выберите, следует ли повторять неудачные запросы к помощнику и как долго ждать между попытками.", + "Select the language that should be used by the assistant": "Выберите язык, который будет использовать помощник", + "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Настройка креативности/последовательности ответов моделей. (Если вы не уверены, оставьте значение по умолчанию!)", + "Settings": "Настройки", + "Sort": "Сортировать", + "Temperature": "Температура", + "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Точка данных, содержащая результат вызова функции (должна быть выполнена за 60 секунд!)", + "The datapoint that starts the request for the function": "Точка данных, которая запускает запрос функции", "URL for Inference Server": "URL-адрес сервера вывода", - "API Token for Inference Server": "Токен API для сервера вывода" + "When activated the internal thought process of the assistant will be written to the response datapoint": "При активации внутренний мыслительный процесс помощника будет записан в точку данных ответа.", + "Which Model should be used": "Какую модель следует использовать", + "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Вы можете использовать собственный или самостоятельно размещённый сервер вывода для запуска моделей с открытым исходным кодом. Сервер должен соответствовать стандартам REST API, используемым многими провайдерами, см. примеры ниже. Обязательно добавьте используемые модели по названию в таблицу ниже." 
} diff --git a/admin/i18n/uk/translations.json b/admin/i18n/uk/translations.json index 9d2ab76..7791f81 100644 --- a/admin/i18n/uk/translations.json +++ b/admin/i18n/uk/translations.json @@ -1,67 +1,68 @@ { + "A descriptive name for your function": "Описова назва вашої функції", + "API Token": "Маркер API", + "API Token for Inference Server": "Маркер API для сервера висновків", + "Active": "Активний", "Assistant Settings": "Налаштування помічника", - "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Назвіть свого персонального помічника та опишіть його характер. Виберіть модель, яка буде використовуватися для вашого помічника.", - "Name": "Ім'я", - "Name for the Assistant": "Ім'я для помічника", - "Model": "Модель", - "Which Model should be used": "Яку модель слід використовувати", - "Personality": "Особистість", + "Assistant can use Object": "Помічник може використовувати Object", + "Assistant can use this function": "Помічник може використовувати цю функцію", + "Custom functions for assistant": "Спеціальні функції для помічника", + "Datapoint (Request)": "Точка даних (запит)", + "Datapoint (Result)": "Точка даних (результат)", + "Debug / Chain-of-Thought Output": "Вивід налагодження/ланцюжка думок", + "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Визначте спеціальні функції для помічника. Обов’язково додайте хороший опис своїх функцій, щоб помічник знав, коли викликати вашу функцію. 
Для кожної функції потрібна точка даних, яка запускає процес, і інша точка даних, яка містить результат вашої функції.", "Describe the personality of your assistant": "Опишіть характер свого помічника", - "Friendly and helpful": "Доброзичливий і корисний", - "Language": "Мова", - "Select the language that should be used by the assistant": "Виберіть мову, якою має користуватися помічник", + "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Опишіть, що робить ваша функція і як мають виглядати дані для запиту. Це важливо, щоб помічник розумів вашу функцію.", + "Description": "опис", + "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Ви дійсно хочете імпортувати об’єкти з enum.rooms? Існуючі об'єкти будуть скинуті!", + "ERROR: column 'Model' must contain unique text": "ПОМИЛКА: стовпець «Модель» повинен містити унікальний текст", "English": "англійська", + "Friendly and helpful": "Доброзичливий і корисний", + "Functions": "Функції", "German": "Німецький", - "Debug / Chain-of-Thought Output": "Вивід налагодження/ланцюжка думок", - "When activated the internal thought process of the assistant will be written to the response datapoint": "Після активації внутрішній процес мислення помічника буде записаний у точку даних відповіді", - "Model Settings": "Параметри моделі", - "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Виберіть, скільки повідомлень слід включити для збереження контексту. Температура визначає креативність/випадковість результату від 0 до 1, де 0 є найбільш передбачуваним результатом. 
Встановіть максимальну кількість маркерів для відповідей помічника.", - "Message History (Chat Mode)": "Історія повідомлень (режим чату)", + "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "Назвіть свого персонального помічника та опишіть його характер. Виберіть модель, яка буде використовуватися для вашого помічника.", + "How long to wait between retries": "Як довго чекати між повторними спробами", + "How many times should we retry if request to model fails": "Скільки разів ми маємо повторити спробу, якщо запит до моделі не вдається", "If greater 0 previous messages will be included in the request so the tool will stay in context": "Якщо більше 0, попередні повідомлення будуть включені в запит, тому інструмент залишатиметься в контексті", - "Temperature": "температура", - "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Налаштування на креативність/послідовність відповідей моделей. (Залиште значення за замовчуванням, якщо ви не впевнені!=", - "Max. Tokens": "Макс. Жетони", + "Import objects from enum.rooms": "Імпортувати об’єкти з enum.rooms", + "Language": "Мова", "Limit the response of the tool to your desired amount of tokens.": "Обмежте відповідь інструменту бажаною кількістю токенів.", - "Request Settings": "Запит налаштувань", - "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Виберіть, чи потрібно повторювати невдалі запити до помічника та скільки часу чекати між спробами.", + "Link to LM Studio": "Посилання на LM Studio", + "Link to LocalAI": "Посилання на LocalAI", "Max. Retries": "Макс. Повторні спроби", - "How many times should we retry if request to model fails": "Скільки разів ми маємо повторити спробу, якщо запит до моделі не вдається", - "Retry Delay": "Затримка повтору", - "How long to wait between retries": "Як довго чекати між повторними спробами", + "Max. 
Tokens": "Макс. Жетони", + "Message History (Chat Mode)": "Історія повідомлень (режим чату)", + "Model": "Модель", + "Model Settings": "Параметри моделі", + "Model is active": "Модель активна", + "Models": "Моделі", + "Name": "Ім'я", + "Name for the Assistant": "Ім'я для помічника", + "Name of the Model": "Назва моделі", + "Object": "Об'єкт", "Object access for assistant": "Доступ до об'єкта для помічника", - "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Будь ласка, додайте об’єкти, які ви хочете використовувати з помічником. Асистент зможе читати ці об’єкти та керувати ними. Ви можете використовувати цю кнопку, щоб імпортувати всі стани з налаштованого сортування кімнат. Переконайтеся, що включено лише необхідні стани для збереження токенів.", - "Import objects from enum.rooms": "Імпортувати об’єкти з enum.rooms", - "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "Ви дійсно хочете імпортувати об’єкти з enum.rooms? Існуючі об'єкти будуть скинуті!", "Objects": "Об'єкти", - "Active": "Активний", - "Assistant can use Object": "Помічник може використовувати Object", - "Sort": "Сортувати", - "Room or sorting for Object": "Кімната або сортування для Об'єкта", - "Object": "Об'єкт", - "Custom functions for assistant": "Спеціальні функції для помічника", - "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "Визначте спеціальні функції для помічника. Обов’язково додайте хороший опис своїх функцій, щоб помічник знав, коли викликати вашу функцію. 
Для кожної функції потрібна точка даних, яка запускає процес, і інша точка даних, яка містить результат вашої функції.", - "Functions": "Функції", - "Assistant can use this function": "Помічник може використовувати цю функцію", - "A descriptive name for your function": "Описова назва вашої функції", - "Description": "опис", - "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "Опишіть, що робить ваша функція і як мають виглядати дані для запиту. Це важливо, щоб помічник розумів вашу функцію.", - "Datapoint (Request)": "Точка даних (запит)", - "The datapoint that starts the request for the function": "Точка даних, з якої починається запит функції", - "Datapoint (Result)": "Точка даних (результат)", - "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Точка даних, яка містить результат виклику вашої функції (має бути виконано за 60 секунд!)", + "Personality": "Особистість", + "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "Будь ласка, додайте об’єкти, які ви хочете використовувати з помічником. Асистент зможе читати ці об’єкти та керувати ними. Ви можете використовувати цю кнопку, щоб імпортувати всі стани з налаштованого сортування кімнат. Переконайтеся, що включено лише необхідні стани для збереження токенів.", "Please enter your Anthropic API Token to start using models like Opus, Haiku and Sonnet. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Введіть свій маркер API Anthropic, щоб почати використовувати такі моделі, як Opus, Haiku та Sonnet. 
Якщо випущено нові моделі, ви можете просто додати їх у таблицю, щоб почати використовувати їх із помічниками штучного інтелекту.", - "Settings": "Налаштування", - "API Token": "Маркер API", - "ERROR: column 'Model' must contain unique text": "ПОМИЛКА: стовпець «Модель» повинен містити унікальний текст", - "Models": "Моделі", - "Model is active": "Модель активна", - "Name of the Model": "Назва моделі", + "Please enter your Deepseek API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Будь ласка, введіть свій маркер API DeepSeek, щоб почати використовувати моделі. Якщо випущені нові моделі, ви можете просто додати їх у таблицю, щоб почати використовувати їх із помічниками AI.", "Please enter your OpenAI API Token to start using models like Gpt4, Gpt4-o1, Gpt3-5. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Будь ласка, введіть свій маркер OpenAI API, щоб почати використовувати такі моделі, як Gpt4, Gpt4-o1, Gpt3-5. Якщо випущено нові моделі, ви можете просто додати їх у таблицю, щоб почати використовувати їх із помічниками штучного інтелекту.", - "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Будь ласка, введіть свій маркер Perplexity API, щоб почати використовувати моделі. Якщо випущено нові моделі, ви можете просто додати їх у таблицю, щоб почати використовувати їх із помічниками штучного інтелекту.", "Please enter your Openrouter API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Щоб почати використовувати моделі, введіть свій маркер Openrouter API. 
Якщо випущено нові моделі, ви можете просто додати їх у таблицю, щоб почати використовувати їх із помічниками штучного інтелекту.", - "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Ви можете використовувати власний або власний сервер висновків для запуску моделей з відкритим кодом. Сервер має відповідати іншим стандартам API, які використовуються багатьма постачальниками, див. приклади нижче. Обов’язково додайте вживані моделі за назвою до таблиці нижче.", - "Link to LM Studio": "Посилання на LM Studio", - "Link to LocalAI": "Посилання на LocalAI", + "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "Будь ласка, введіть свій маркер Perplexity API, щоб почати використовувати моделі. Якщо випущено нові моделі, ви можете просто додати їх у таблицю, щоб почати використовувати їх із помічниками штучного інтелекту.", + "Request Settings": "Налаштування запитів", + "Retry Delay": "Затримка повтору", + "Room or sorting for Object": "Кімната або сортування для Об'єкта", + "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "Виберіть, скільки повідомлень слід включити для збереження контексту. Температура визначає креативність/випадковість результату від 0 до 1, де 0 є найбільш передбачуваним результатом. 
Встановіть максимальну кількість маркерів для відповідей помічника.", + "Select if failed requests to the assistant should be retried and how long to wait between tries.": "Виберіть, чи потрібно повторювати невдалі запити до помічника та скільки часу чекати між спробами.", + "Select the language that should be used by the assistant": "Виберіть мову, якою має користуватися помічник", + "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "Налаштування на креативність/послідовність відповідей моделей. (Залиште значення за замовчуванням, якщо ви не впевнені!)", + "Settings": "Налаштування", + "Sort": "Сортувати", + "Temperature": "Температура", + "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "Точка даних, яка містить результат виклику вашої функції (має бути виконано за 60 секунд!)", + "The datapoint that starts the request for the function": "Точка даних, з якої починається запит функції", "URL for Inference Server": "URL для сервера висновків", - "API Token for Inference Server": "Маркер API для сервера висновків" + "When activated the internal thought process of the assistant will be written to the response datapoint": "Після активації внутрішній процес мислення помічника буде записаний у точку даних відповіді", + "Which Model should be used": "Яку модель слід використовувати", + "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "Ви можете використовувати власний або самостійно розміщений сервер висновків для запуску моделей з відкритим кодом. Сервер має відповідати стандартам REST API, які використовуються багатьма постачальниками, див. приклади нижче. Обов’язково додайте використовувані моделі за назвою до таблиці нижче." 
} diff --git a/admin/i18n/zh-cn/translations.json b/admin/i18n/zh-cn/translations.json index a0ecfbf..4fb6b32 100644 --- a/admin/i18n/zh-cn/translations.json +++ b/admin/i18n/zh-cn/translations.json @@ -1,67 +1,68 @@ { + "A descriptive name for your function": "函数的描述性名称", + "API Token": "API令牌", + "API Token for Inference Server": "推理服务器的 API 令牌", + "Active": "激活", "Assistant Settings": "助手设置", - "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "为您的私人助理命名并描述其个性。选择适合您的助手的型号。", - "Name": "姓名", - "Name for the Assistant": "助理的名字", - "Model": "模型", - "Which Model should be used": "应使用哪种型号", - "Personality": "性格", + "Assistant can use Object": "助手可以使用对象", + "Assistant can use this function": "助手可以使用此功能", + "Custom functions for assistant": "助手自定义功能", + "Datapoint (Request)": "数据点(请求)", + "Datapoint (Result)": "数据点(结果)", + "Debug / Chain-of-Thought Output": "调试/思路输出", + "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "为助手定义自定义功能。确保为您的函数添加良好的描述,以便助手知道何时调用您的函数。每个函数都需要一个启动进程的数据点和另一个包含函数结果的数据点。", "Describe the personality of your assistant": "描述一下你的助理的性格", - "Friendly and helpful": "友好且乐于助人", - "Language": "语言", - "Select the language that should be used by the assistant": "选择助手应使用的语言", + "Describe what your function does and how the data for the request should look. This is important for the assistant to understand your function.": "描述您的函数的作用以及请求的数据应如何显示。这对于助理了解您的职能非常重要。", + "Description": "描述", + "Do you really want to import objects from enum.rooms? 
Existing objects will be reset!": "您真的想从 enum.rooms 导入对象吗?现有对象将被重置!", + "ERROR: column 'Model' must contain unique text": "错误:“模型”列必须包含唯一文本", "English": "英语", + "Friendly and helpful": "友好且乐于助人", + "Functions": "功能", "German": "德语", - "Debug / Chain-of-Thought Output": "调试/思路输出", - "When activated the internal thought process of the assistant will be written to the response datapoint": "激活后,助手的内部思维过程将被写入响应数据点", - "Model Settings": "模型设置", - "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "选择应包含多少消息以保留上下文。温度定义了 0-1 范围内输出的创造性/随机性,其中 0 是最可预测的输出。设置助理响应时应生成的最大令牌数。", - "Message History (Chat Mode)": "消息历史记录(聊天模式)", + "Give your personal assistant a name and describe its personality. Choose a model that should be used for your assistant.": "为您的私人助理命名并描述其个性。选择适合您的助手的型号。", + "How long to wait between retries": "重试之间等待多长时间", + "How many times should we retry if request to model fails": "如果模型请求失败,我们应该重试多少次", "If greater 0 previous messages will be included in the request so the tool will stay in context": "如果请求中将包含大于 0 的先前消息,则该工具将保留在上下文中", - "Temperature": "温度", - "Setting for creativity/consistency of the models response. (Leave at default if you are not sure!=": "设置模型响应的创造力/一致性。 (如果不确定就保留默认值!=", - "Max. Tokens": "最大限度。代币", + "Import objects from enum.rooms": "从 enum.rooms 导入对象", + "Language": "语言", "Limit the response of the tool to your desired amount of tokens.": "将工具的响应限制为您所需的令牌数量。", - "Request Settings": "请求设置", - "Select if failed requests to the assistant should be retried and how long to wait between tries.": "选择是否应重试对助手的失败请求以及两次尝试之间等待的时间。", + "Link to LM Studio": "LM Studio 链接", + "Link to LocalAI": "链接到 LocalAI", "Max. 
Retries": "最大限度。重试", - "How many times should we retry if request to model fails": "如果模型请求失败,我们应该重试多少次", - "Retry Delay": "重试延迟", - "How long to wait between retries": "重试之间等待多长时间", + "Max. Tokens": "最大限度。代币", + "Message History (Chat Mode)": "消息历史记录(聊天模式)", + "Model": "模型", + "Model Settings": "模型设置", + "Model is active": "模型处于活动状态", + "Models": "型号", + "Name": "姓名", + "Name for the Assistant": "助理的名字", + "Name of the Model": "型号名称", + "Object": "对象", "Object access for assistant": "助理的对象访问", - "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "请添加您想要与助手一起使用的对象。助手将能够读取和控制这些对象。您可以使用该按钮从配置的房间排序中导入所有状态。确保仅包含保存令牌所需的状态。", - "Import objects from enum.rooms": "从 enum.rooms 导入对象", - "Do you really want to import objects from enum.rooms? Existing objects will be reset!": "您真的想从 enum.rooms 导入对象吗?现有对象将被重置!", "Objects": "对象", - "Active": "积极的", - "Assistant can use Object": "助手可以使用对象", - "Sort": "种类", - "Room or sorting for Object": "房间或对象排序", - "Object": "目的", - "Custom functions for assistant": "助手自定义功能", - "Define custom functions for the assistant. Make sure to add a good description for your functions so the assistant knows when to call your function. Each function needs a datapoint that starts the process and another datapoint that contains the result of your function.": "为助手定义自定义功能。确保为您的函数添加良好的描述,以便助手知道何时调用您的函数。每个函数都需要一个启动进程的数据点和另一个包含函数结果的数据点。", - "Functions": "功能", - "Assistant can use this function": "助手可以使用此功能", - "A descriptive name for your function": "函数的描述性名称", - "Description": "描述", - "Describe what your function does and how the data for the request should look. 
This is important for the assistant to understand your function.": "描述您的函数的作用以及请求的数据应如何显示。这对于助理了解您的职能非常重要。", - "Datapoint (Request)": "数据点(请求)", - "The datapoint that starts the request for the function": "启动函数请求的数据点", - "Datapoint (Result)": "数据点(结果)", - "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "包含函数调用结果的数据点(必须在 60 秒内完成!)", + "Personality": "性格", + "Please add the objects you want to use with the assistant. The assistant will be able to read and control these objects. You can use the button to import all states from your configured room sorting. Make sure to only include needed states to save tokens.": "请添加您想要与助手一起使用的对象。助手将能够读取和控制这些对象。您可以使用该按钮从配置的房间排序中导入所有状态。确保仅包含保存令牌所需的状态。", "Please enter your Anthropic API Token to start using models like Opus, Haiku and Sonnet. If there are new models released you can simply add them in the table to start using them with ai assistants.": "请输入您的 Anthropic API 令牌以开始使用 Opus、Haiku 和 Sonnet 等模型。如果有新模型发布,您只需将它们添加到表中即可开始与人工智能助手一起使用。", - "Settings": "设置", - "API Token": "API令牌", - "ERROR: column 'Model' must contain unique text": "错误:“模型”列必须包含唯一文本", - "Models": "型号", - "Model is active": "模型处于活动状态", - "Name of the Model": "型号名称", + "Please enter your Deepseek API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "请输入您的DeepSeek API令牌以开始使用模型。如果发布了新型号,您只需在表中添加它们即可开始与AI助手一起使用它们。", "Please enter your OpenAI API Token to start using models like Gpt4, Gpt4-o1, Gpt3-5. If there are new models released you can simply add them in the table to start using them with ai assistants.": "请输入您的 OpenAI API 令牌以开始使用 Gpt4、Gpt4-o1、Gpt3-5 等模型。如果有新模型发布,您只需将它们添加到表中即可开始与人工智能助手一起使用。", - "Please enter your Perplexity API Token to start using the models. 
If there are new models released you can simply add them in the table to start using them with ai assistants.": "请输入您的 Perplexity API 令牌以开始使用模型。如果有新模型发布,您只需将它们添加到表中即可开始与人工智能助手一起使用。", "Please enter your Openrouter API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "请输入您的 Openrouter API 令牌以开始使用模型。如果有新模型发布,您只需将它们添加到表中即可开始与人工智能助手一起使用。", - "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "您可以使用自定义或自托管推理服务器来运行开源模型。服务器需要遵循许多提供商使用的其余 API 标准,请参阅下面的示例。请确保将您使用的型号按名称添加到下表中。", - "Link to LM Studio": "LM Studio 链接", - "Link to LocalAI": "链接到本地​​人工智能", + "Please enter your Perplexity API Token to start using the models. If there are new models released you can simply add them in the table to start using them with ai assistants.": "请输入您的 Perplexity API 令牌以开始使用模型。如果有新模型发布,您只需将它们添加到表中即可开始与人工智能助手一起使用。", + "Request Settings": "请求设置", + "Retry Delay": "重试延迟", + "Room or sorting for Object": "房间或对象排序", + "Select how many messages should be included for context retention. Temperature defines creativity/randomness of output from 0-1 where 0 is the most predictable output. Set how many tokens should be generated max for assistant responses.": "选择应包含多少消息以保留上下文。温度定义了 0-1 范围内输出的创造性/随机性,其中 0 是最可预测的输出。设置助理响应时应生成的最大令牌数。", + "Select if failed requests to the assistant should be retried and how long to wait between tries.": "选择是否应重试对助手的失败请求以及两次尝试之间等待的时间。", + "Select the language that should be used by the assistant": "选择助手应使用的语言", + "Setting for creativity/consistency of the models response. 
(Leave at default if you are not sure!=": "设置模型响应的创造力/一致性。 (如果不确定就保留默认值!)", + "Settings": "设置", + "Sort": "排序", + "Temperature": "温度", + "The datapoint that contains the result of your function call (Has to be fulfilled in 60 Seconds!)": "包含函数调用结果的数据点(必须在 60 秒内完成!)", + "The datapoint that starts the request for the function": "启动函数请求的数据点", "URL for Inference Server": "推理服务器的 URL", - "API Token for Inference Server": "推理服务器的 API 令牌" + "When activated the internal thought process of the assistant will be written to the response datapoint": "激活后,助手的内部思维过程将被写入响应数据点", + "Which Model should be used": "应使用哪种型号", + "You can use your custom or self hosted inference server to run open source models. The server needs to follow the rest api standards used by many providers, see examples below. Please make sure to add your used models by name to the table below.": "您可以使用自定义或自托管推理服务器来运行开源模型。服务器需要遵循许多提供商使用的 REST API 标准,请参阅下面的示例。请确保将您使用的型号按名称添加到下表中。" } diff --git a/admin/jsonConfig.json b/admin/jsonConfig.json index 6561c39..5efa762 100644 --- a/admin/jsonConfig.json +++ b/admin/jsonConfig.json @@ -688,6 +688,86 @@ } }, "tab_7": { + "type": "panel", + "label": "Deepseek", + "items": { + "deviderTxt1": { + "type": "staticText", + "text": "Please enter your Deepseek API Token to start using the models. 
If there are new models released you can simply add them in the table to start using them with ai assistants.", + "newLine": true, + "xs": 12, + "sm": 12, + "md": 12, + "lg": 12, + "xl": 12 + }, + "dividerHdr": { + "newLine": true, + "type": "header", + "text": "Settings", + "size": 2 + }, + "deep_api_token": { + "type": "password", + "label": "API Token", + "xs": 12, + "sm": 12, + "md": 6, + "lg": 4, + "xl": 4 + }, + "model_name_unique_error": { + "type": "staticText", + "text": "ERROR: column 'Model' must contain unique text", + "newLine": true, + "hidden": "const x={}; for(let ii=0; data.pplx_models && ii 0 ) { for (const timeout of responseData.createTimeouts) { - setTimeout( + this.adapter.setTimeout( () => { const timeoutExecutionData = { type: "wakeUpFromTimeout", diff --git a/main.js b/main.js index 28b91ee..93093be 100644 --- a/main.js +++ b/main.js @@ -12,6 +12,7 @@ const AnthropicAiProvider = require("./lib/providers/anthropic-ai-provider"); const OpenAiProvider = require("./lib/providers/openai-ai-provider"); const PerplexityAiProvider = require("./lib/providers/perplexity-ai-provider"); const OpenRouterAiProvider = require("./lib/providers/openrouter-ai-provider"); +const DeepseekAiProvider = require("./lib/providers/deepseek-ai-provider"); const CustomAiProvider = require("./lib/providers/custom-ai-provider"); // Tools @@ -80,7 +81,7 @@ class AiAssistant extends utils.Adapter { // Create Models and Assistant objects await this.setObjectAsync("Models", { - type: "device", + type: "folder", common: { name: "AI Models", desc: "Statistics and Data for used AI Models", @@ -89,7 +90,7 @@ class AiAssistant extends utils.Adapter { }); await this.setObjectAsync("Assistant", { - type: "device", + type: "folder", common: { name: "Assistant", desc: "Interact with your Assistant", @@ -98,7 +99,7 @@ class AiAssistant extends utils.Adapter { }); await this.setObjectAsync("Cronjobs", { - type: "device", + type: "folder", common: { name: "Cronjobs", desc: "Cronjobs 
created by Assistant", @@ -107,7 +108,7 @@ class AiAssistant extends utils.Adapter { }); await this.setObjectAsync("Triggers", { - type: "device", + type: "folder", common: { name: "Triggers", desc: "Triggers created by Assistant", @@ -122,7 +123,7 @@ class AiAssistant extends utils.Adapter { this.log.debug(`Initializing objects for model: ${model}`); await this.setObjectAsync(`Models.${model}`, { - type: "device", + type: "folder", common: { name: model, desc: `Model ${modelName} for the AI Assistant`, @@ -131,7 +132,7 @@ class AiAssistant extends utils.Adapter { }); await this.setObjectAsync(`Models.${model}.statistics`, { - type: "device", + type: "folder", common: { name: "Statistics", desc: `Statistics for the model ${modelName} like requests count, tokens used, etc.`, @@ -140,7 +141,7 @@ class AiAssistant extends utils.Adapter { }); await this.setObjectAsync(`Models.${model}.response`, { - type: "device", + type: "folder", common: { name: "Response data", desc: `Response data for the model ${modelName} like raw response, error response, etc.`, @@ -149,7 +150,7 @@ class AiAssistant extends utils.Adapter { }); await this.setObjectAsync(`Models.${model}.request`, { - type: "device", + type: "folder", common: { name: "Request data", desc: `Request data for the model ${modelName} like request body, state, etc.`, @@ -163,7 +164,7 @@ class AiAssistant extends utils.Adapter { name: "Request state", desc: "State for the running inference request", type: "string", - role: "indicator", + role: "text", read: true, write: false, def: "", @@ -177,7 +178,7 @@ class AiAssistant extends utils.Adapter { name: "Request body", desc: "Sent body for the running inference request", type: "string", - role: "indicator", + role: "json", read: true, write: false, def: "", @@ -191,7 +192,7 @@ class AiAssistant extends utils.Adapter { name: "Raw response", desc: `Raw response for model${modelName}`, type: "string", - role: "indicator", + role: "json", read: true, write: false, def: "", 
@@ -205,7 +206,7 @@ class AiAssistant extends utils.Adapter { name: "Error response", desc: `Error response for model${modelName}`, type: "string", - role: "indicator", + role: "text", read: true, write: false, def: "", @@ -219,7 +220,7 @@ class AiAssistant extends utils.Adapter { name: "Input tokens", desc: `Used input tokens for model${modelName}`, type: "number", - role: "indicator", + role: "state", read: true, write: false, def: 0, @@ -233,7 +234,7 @@ class AiAssistant extends utils.Adapter { name: "Output tokens", desc: `Used output tokens for model${modelName}`, type: "number", - role: "indicator", + role: "state", read: true, write: false, def: 0, @@ -247,7 +248,7 @@ class AiAssistant extends utils.Adapter { name: "Count requests", desc: `Count of requests for model${modelName}`, type: "number", - role: "indicator", + role: "state", read: true, write: false, def: 0, @@ -261,7 +262,7 @@ class AiAssistant extends utils.Adapter { name: "Last request", desc: `Last request for model${modelName}`, type: "string", - role: "indicator", + role: "date", read: true, write: false, def: "", @@ -302,7 +303,7 @@ class AiAssistant extends utils.Adapter { }); await this.setObjectAsync("Assistant.statistics", { - type: "device", + type: "folder", common: { name: "Statistics", desc: "Statistics for the Assistant like requests count, tokens used, etc.", @@ -311,7 +312,7 @@ class AiAssistant extends utils.Adapter { }); await this.setObjectAsync("Assistant.response", { - type: "device", + type: "folder", common: { name: "Response data", desc: "Response data for the Assistant like raw response, error response, etc.", @@ -320,7 +321,7 @@ class AiAssistant extends utils.Adapter { }); await this.setObjectAsync("Assistant.request", { - type: "device", + type: "folder", common: { name: "Request data", desc: "Request data for the Assistant like request body, state, etc.", @@ -334,7 +335,7 @@ class AiAssistant extends utils.Adapter { name: "Previous messages", desc: "Previous messages 
for the Assistant", type: "string", - role: "text", + role: "json", read: true, write: false, def: '{"messages": []}', @@ -349,7 +350,7 @@ class AiAssistant extends utils.Adapter { desc: "Clear previous message history for the Assistant", type: "boolean", role: "button", - read: true, + read: false, write: true, def: true, }, @@ -362,7 +363,7 @@ class AiAssistant extends utils.Adapter { name: "State", desc: "State for the running inference request", type: "string", - role: "indicator", + role: "text", read: true, write: false, def: "", @@ -376,7 +377,7 @@ class AiAssistant extends utils.Adapter { name: "Request body", desc: "Sent body for the running inference request", type: "string", - role: "indicator", + role: "json", read: true, write: false, def: "", @@ -390,7 +391,7 @@ class AiAssistant extends utils.Adapter { name: "Response Raw", desc: "Raw response from Assistant", type: "string", - role: "indicator", + role: "json", read: true, write: false, def: "", @@ -404,7 +405,7 @@ class AiAssistant extends utils.Adapter { name: "Error response", desc: "Error response from Assistant", type: "string", - role: "indicator", + role: "text", read: true, write: false, def: "", @@ -418,7 +419,7 @@ class AiAssistant extends utils.Adapter { name: "Input tokens", desc: "Used input tokens for Assistant", type: "number", - role: "indicator", + role: "state", read: true, write: false, def: 0, @@ -432,7 +433,7 @@ class AiAssistant extends utils.Adapter { name: "Output tokens", desc: "Used output tokens for Assistant", type: "number", - role: "indicator", + role: "state", read: true, write: false, def: 0, @@ -446,7 +447,7 @@ class AiAssistant extends utils.Adapter { name: "Requests count", desc: "Count of requests for Assistant", type: "number", - role: "indicator", + role: "state", read: true, write: false, def: 0, @@ -460,7 +461,7 @@ class AiAssistant extends utils.Adapter { name: "Last request", desc: "Last request for Assistant", type: "string", - role: "indicator", + role: 
"date", read: true, write: false, def: "", @@ -481,7 +482,7 @@ class AiAssistant extends utils.Adapter { onUnload(callback) { try { for (const timeout of this.timeouts) { - clearTimeout(timeout); + this.clearTimeout(timeout); } callback(); } catch (e) { @@ -497,10 +498,12 @@ class AiAssistant extends utils.Adapter { * @param state - The new state. */ async onStateChange(id, state) { + // Only handle state changes if they are not acknowledged + if (state && state.ack !== false) { + return; + } if (state) { // The state was changed - //this.log.debug(`state ${id} changed: ${state.val} (ack = ${state.ack})`); - if (id.includes(".clear_messages") && state.val) { await this.clearHistory(); } @@ -627,6 +630,7 @@ class AiAssistant extends utils.Adapter { val: JSON.stringify(modelResponse.responseData), ack: true, }); + requestCompleted = false; } } else { this.log.warn("Assistant response text is empty, cant handle response!"); @@ -1089,7 +1093,7 @@ FunctionResultData: ${JSON.stringify(functionResponse.result)} val: I18n.translate("assistant_function_delete_history_success"), ack: true, }); - setTimeout(async () => { + this.setTimeout(async () => { await this.clearHistory(); }, 3000); return null; @@ -1265,20 +1269,35 @@ FunctionResultData: ${JSON.stringify(functionResponse.result)} */ getAvailableModels() { const models = []; - for (const model of this.config.anth_models) { - models.push({ label: model.model_name, value: model.model_name }); + if (this.config.anth_models) { + for (const model of this.config.anth_models) { + models.push({ label: `(Anthropic) ${model.model_name}`, value: model.model_name }); + } } - for (const model of this.config.opai_models) { - models.push({ label: model.model_name, value: model.model_name }); + if (this.config.opai_models) { + for (const model of this.config.opai_models) { + models.push({ label: `(OpenAI) ${model.model_name}`, value: model.model_name }); + } } - for (const model of this.config.custom_models) { - models.push({ label: 
model.model_name, value: model.model_name }); + if (this.config.custom_models) { + for (const model of this.config.custom_models) { + models.push({ label: `(Custom) ${model.model_name}`, value: model.model_name }); + } } - for (const model of this.config.pplx_models) { - models.push({ label: model.model_name, value: model.model_name }); + if (this.config.pplx_models) { + for (const model of this.config.pplx_models) { + models.push({ label: `(Perplexity) ${model.model_name}`, value: model.model_name }); + } } - for (const model of this.config.oprt_models) { - models.push({ label: model.model_name, value: model.model_name }); + if (this.config.oprt_models) { + for (const model of this.config.oprt_models) { + models.push({ label: `(OpenRouter) ${model.model_name}`, value: model.model_name }); + } + } + if (this.config.deep_models) { + for (const model of this.config.deep_models) { + models.push({ label: `(Deepseek) ${model.model_name}`, value: model.model_name }); + } } return models; } @@ -1296,9 +1315,10 @@ FunctionResultData: ${JSON.stringify(functionResponse.result)} const opai_models = this.config.opai_models; const pplx_models = this.config.pplx_models; const oprt_models = this.config.oprt_models; + const deep_models = this.config.deep_models; const custom_models = this.config.custom_models; - if (anth_models.length > 0) { + if (anth_models) { for (const model of anth_models) { if (model.model_name == requestedModel && model.model_active) { this.log.debug(`Provider for Model ${model.model_name} is Anthropic`); @@ -1307,7 +1327,7 @@ FunctionResultData: ${JSON.stringify(functionResponse.result)} } } - if (opai_models.length > 0) { + if (opai_models) { for (const model of opai_models) { if (model.model_name == requestedModel && model.model_active) { this.log.debug(`Provider for Model ${model.model_name} is OpenAI`); @@ -1316,7 +1336,7 @@ FunctionResultData: ${JSON.stringify(functionResponse.result)} } } - if (custom_models.length > 0) { + if (custom_models) { for 
(const model of custom_models) { if (model.model_name == requestedModel && model.model_active) { this.log.debug(`Provider for Model ${model.model_name} is Custom/Selfhosted`); @@ -1325,7 +1345,7 @@ FunctionResultData: ${JSON.stringify(functionResponse.result)} } } - if (pplx_models.length > 0) { + if (pplx_models) { for (const model of pplx_models) { if (model.model_name == requestedModel && model.model_active) { this.log.debug(`Provider for Model ${model.model_name} is Perplexity`); @@ -1334,7 +1354,7 @@ FunctionResultData: ${JSON.stringify(functionResponse.result)} } } - if (oprt_models.length > 0) { + if (oprt_models) { for (const model of oprt_models) { if (model.model_name == requestedModel && model.model_active) { this.log.debug(`Provider for Model ${model.model_name} is OpenRouter`); @@ -1343,6 +1363,15 @@ FunctionResultData: ${JSON.stringify(functionResponse.result)} } } + if (deep_models) { + for (const model of deep_models) { + if (model.model_name == requestedModel && model.model_active) { + this.log.debug(`Provider for Model ${model.model_name} is Deepseek`); + return new DeepseekAiProvider(this); + } + } + } + this.log.warn(`No provider found for model ${requestedModel}`); return null; } diff --git a/package.json b/package.json index 022f84b..378775f 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "iobroker.ai-assistant", - "version": "0.1.2", + "version": "0.1.3", "description": "AI Assistant adapter allows you to control your ioBroker trought artifical intelligence based on LLMs", "author": { "name": "ToGe3688",