
[bug]: ConnectionError when offline #6702

Closed
1 task done
TobiasReich opened this issue Jul 30, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@TobiasReich

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

GeForce RTX 4080

GPU VRAM

16GB

Version number

4.2.7

Browser

Firefox

Python dependencies

No response

What happened

I'm using an offline system, but when I load InvokeAI and select one of the additionally installed checkpoint models, it needs an internet connection in order to load a config YAML file.
This was not the case in 4.2.4, so the regression must be recent.

The exception in the UI looks like this:

Server Error

ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000022C3B65F220>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

Looking at the logs I see this more detailed stack trace:

[InvokeAI]::ERROR --> Error while invoking session b0cadb6a-0994-4283-8b2e-f7d4e84ffe66, invocation f24e724e-e70a-487b-b48a-8cc1e364e1d3 (compel): HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000021EBAC0AF50>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
[2024-07-31 00:00:44,236]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "C:\Users\T\invokeai\.venv\lib\site-packages\urllib3\connection.py", line 174, in _new_conn
    conn = connection.create_connection(
  File "C:\Users\T\invokeai\.venv\lib\site-packages\urllib3\util\connection.py", line 72, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "C:\Users\T\AppData\Local\Programs\Python\Python310\lib\socket.py", line 955, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\T\invokeai\.venv\lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
    httplib_response = self._make_request(
  File "C:\Users\T\invokeai\.venv\lib\site-packages\urllib3\connectionpool.py", line 404, in _make_request
    self._validate_conn(conn)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\urllib3\connectionpool.py", line 1060, in _validate_conn
    conn.connect()
  File "C:\Users\T\invokeai\.venv\lib\site-packages\urllib3\connection.py", line 363, in connect
    self.sock = conn = self._new_conn()
  File "C:\Users\T\invokeai\.venv\lib\site-packages\urllib3\connection.py", line 186, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x0000021EBAC0AF50>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\T\invokeai\.venv\lib\site-packages\requests\adapters.py", line 489, in send
    resp = conn.urlopen(
  File "C:\Users\T\invokeai\.venv\lib\site-packages\urllib3\connectionpool.py", line 801, in urlopen
    retries = retries.increment(
  File "C:\Users\T\invokeai\.venv\lib\site-packages\urllib3\util\retry.py", line 594, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000021EBAC0AF50>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\T\invokeai\.venv\lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 289, in invoke_internal
    output = self.invoke(context)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\invokeai\app\invocations\compel.py", line 66, in invoke
    tokenizer_info = context.models.load(self.clip.tokenizer)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\invokeai\app\services\shared\invocation_context.py", line 369, in load
    return self._services.model_manager.load.load_model(model, _submodel_type)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\invokeai\app\services\model_load\model_load_default.py", line 70, in load_model
    ).load_model(model_config, submodel_type)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 56, in load_model
    locker = self._load_and_cache(model_config, submodel_type)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 75, in _load_and_cache
    loaded_model = self._load_model(config, submodel_type)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\invokeai\backend\model_manager\load\model_loaders\stable_diffusion.py", line 57, in _load_model
    return self._load_from_singlefile(config, submodel_type)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\invokeai\backend\model_manager\load\model_loaders\stable_diffusion.py", line 122, in _load_from_singlefile
    pipeline = load_class.from_single_file(
  File "C:\Users\T\invokeai\.venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\diffusers\loaders\single_file.py", line 253, in from_single_file
    original_config, checkpoint = fetch_ldm_config_and_checkpoint(
  File "C:\Users\T\invokeai\.venv\lib\site-packages\diffusers\loaders\single_file_utils.py", line 324, in fetch_ldm_config_and_checkpoint
    original_config = fetch_original_config(class_name, checkpoint, original_config_file)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\diffusers\loaders\single_file_utils.py", line 396, in fetch_original_config
    original_config_file = infer_original_config_file(pipeline_class_name, checkpoint)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\diffusers\loaders\single_file_utils.py", line 382, in infer_original_config_file
    original_config_file = BytesIO(requests.get(config_url).content)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\requests\api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\requests\api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\requests\sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\requests\sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "C:\Users\T\invokeai\.venv\lib\site-packages\requests\adapters.py", line 565, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000021EBAC0AF50>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

This happens with all models I load from hard disk (as mentioned, this is an offline system), whereas the "preinstalled" models seem to work (I assume the required config was downloaded during the installation process).

All of this worked perfectly fine in 4.2.4, and it happens with any model I add manually (e.g. copied into the models folder and installed locally via the Model Manager).
I hope there is a workaround for systems that are not permanently connected.
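A possible workaround sketch, based only on what the traceback shows: diffusers' from_single_file falls back to downloading v1-inference.yaml from raw.githubusercontent.com when no original config is supplied, and fetch_ldm_config_and_checkpoint accepts an original_config_file argument. The helper below is a hypothetical illustration (the function name and the idea of keeping the YAML next to the checkpoint are my assumptions, and InvokeAI calls from_single_file internally, so this is not a drop-in fix for 4.2.7); it only builds the kwargs one would pass if calling diffusers directly.

```python
from pathlib import Path


def single_file_kwargs(checkpoint: Path) -> dict:
    """Build kwargs for diffusers' from_single_file so no network fetch is needed.

    Assumes v1-inference.yaml was copied next to the checkpoint beforehand
    (e.g. while the machine was still online); the filename is the one the
    traceback shows being fetched from raw.githubusercontent.com.
    """
    config = checkpoint.with_name("v1-inference.yaml")
    if not config.is_file():
        raise FileNotFoundError(
            f"Offline config missing: {config}. Copy it from the "
            "CompVis/stable-diffusion repo while online."
        )
    return {
        # Passing the config explicitly should skip infer_original_config_file,
        # which is where the requests.get() call in the traceback originates.
        "pretrained_model_link_or_path": str(checkpoint),
        "original_config_file": str(config),
    }


# Hypothetical usage (requires diffusers installed):
# pipe = StableDiffusionPipeline.from_single_file(**single_file_kwargs(Path("model.safetensors")))
```

Whether the InvokeAI model manager exposes a way to point a checkpoint at a local config like this in 4.2.7 is something the maintainers would have to confirm.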

What you expected to happen

In previous versions a local model was simply installed and worked fine. Now it tries to fetch files from the internet. It would be great if this worked again for offline workstations.

How to reproduce the problem

1. Have a clean installation of InvokeAI 4.2.7.
2. Copy any checkpoint model to the workstation.
3. Use the Model Manager to install the given model.
4. Generate any image.

The following message pops up, no generation is done:

Server Error

ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000022C3B65F220>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

Additional context

No response

Discord username

No response

@TobiasReich TobiasReich added the bug Something isn't working label Jul 30, 2024
@psychedelicious
Collaborator

This is a duplicate of #6623

@webtv123

This problem has been going on for a long time without a fix for offline use; they keep upgrading and ignoring the issue. Word is getting around that this may be spyware, because if it worked before, why does it not work now? Somebody had to intentionally make it run online-only.
