
GMA : http2: frame too large #90

Open
ketomagasaki opened this issue Feb 4, 2024 · 11 comments

@ketomagasaki

Hello,
While trying to find out what is filling my log files, I am looking at the client side. On the client, I see two errors repeating in a loop, but I will focus on the first one. Here are the logs from journalctl -e on a node:

Feb 04 11:12:23 CT-MARIADB-NODE1 gma[3644939]: time="2024-02-04T11:12:23Z" level=info msg=Connecting...
Feb 04 11:12:23 CT-MARIADB-NODE1 gma[3644939]: time="2024-02-04T11:12:23Z" level=info msg="Creating agentcom stream..."
Feb 04 11:12:23 CT-MARIADB-NODE1 gma[3644939]: time="2024-02-04T11:12:23Z" level=error msg="Failed to create stream: rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: http2: frame too large\""
Feb 04 11:12:23 CT-MARIADB-NODE1 gma[3644939]: time="2024-02-04T11:12:23Z" level=error msg="Error while serving: rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: http2: frame too large\"; sleeping for 1 second"

Apparently, GMA stands for Galera Manager Agent, and the indicated error is generally caused by a TLS problem.
But here I'm stuck; I can't find where GMA is configured or where the client's config files that GMA uses are located so that I can tell it to:

  • Use my private authority certificate
  • Disable certificate verification

Do you have any idea on this matter?
Thank you.
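
For reference, a way to at least find where the agent gets its settings, using standard systemd tooling (a sketch; the unit name gma comes from later comments, and none of this is a documented gma option):

systemctl cat gma                                 # shows the unit file, any command-line flags and EnvironmentFile= it is started with
systemctl show gma -p ExecStart -p Environment    # exact command line and environment of the running service
journalctl -u gma -e                              # the agent's own log, without the rest of syslog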

@denisgcm
Collaborator

denisgcm commented Feb 4, 2024

Thank you for reporting the issue. Let me investigate it and come back to you with a solution.

@denisgcm
Collaborator

denisgcm commented Feb 4, 2024

As a short-term solution you can stop the agent for now by running systemctl stop gma. It won't break anything, as the agent is still under development and we deploy it only for functionality that is yet to come. As a long-term solution, we'll prepare an updated version of the agent (which is bundled along with gmd - Galera Manager Daemon).
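
A minimal sketch of that short-term workaround (the disable step is an addition beyond the above, just to keep the agent from starting again on reboot):

systemctl stop gma       # stop the agent; gmd and the database nodes keep running
systemctl disable gma    # optional: prevent the agent from starting again on reboot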

To make it smoother, can you give us a bit more detail on how you configured Galera Manager? Did you use gm-installer? What parameters did you provide to it during the installation?

@ketomagasaki
Author

Thank you for the response.
To simplify, here is the procedure I wrote for the installation of GM and Node N°1:

On the LXC container: CT-MARIADB-GM:

apt install curl software-properties-common
curl -sO https://galeracluster.com/galera-manager/gm-installer
chmod +x gm-installer
./gm-installer install
  • License agreement : a
  • GMD Package Repository URL :
  • GMD Admin User Login : xxxx
  • GMD Admin Password : xxxx
  • Enter your domain name or IP of the server : maria-gm.xxxxxxx
  • Enable https? : y
  • Use LetsEncrypt Certbot to autogenerate the certificates : n
  • Do you want to provide your own SSL CA? n
  • Use your own SSL certificate (y), or let installer generate one (n)? y
  • SSL Host Certificate : /etc/ssl/certs/xxxxxxx.crt
  • SSL Host Certificate Key : /etc/ssl/certs/xxxxxx.key
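
A quick sanity check of the certificate and key passed to the installer (a sketch using the paths above; plain openssl, nothing specific to gm-installer - the two public-key hashes must match for the pair to be usable):

openssl x509 -in /etc/ssl/certs/xxxxxxx.crt -noout -subject -issuer -dates
openssl x509 -in /etc/ssl/certs/xxxxxxx.crt -noout -pubkey | sha256sum
openssl pkey -in /etc/ssl/certs/xxxxxx.key -pubout | sha256sum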

Configuration :
Management > ADD CLUSTER > Deploy cluster on user-provided hosts :
Name : maria-cluster
Node DB Engine : mariadb:10.11 LTS
Host system : ubuntu:22.04 LTS

On the LXC container: CT-MARIADB-NODE1:

mkdir .ssh
chmod 700 .ssh
nano .ssh/authorized_keys
ssh-rsa XXXXXXXXXXXXXXXXXXXX
chmod 600 .ssh/authorized_keys
apt-cache search ca-certificates
mv /tmp/localCA.pem /usr/local/share/ca-certificates/localCA.crt
sudo update-ca-certificates
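
A sketch to confirm the private CA actually made it into the system trust store (standard tooling, nothing GMA-specific):

ls -l /etc/ssl/certs/ | grep -i localca        # update-ca-certificates should have created a symlink for the CA
openssl x509 -in /usr/local/share/ca-certificates/localCA.crt -noout -subject -issuer -dates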

On the LXC container: CT-MARIADB-GM:

Management > maria-cluster > : > Add node
Name : node1
Host system : ubuntu:22.04 LTS
Segment : 0
SSH Address : xx.xx.xx.xx
SSH Port : 22

On the LXC container: CT-MARIADB-NODE1:

mv /tmp/xxxxxx1.key /etc/ssl/certs/xxxxxx1.key
mv /tmp/xxxxxx1.crt /etc/ssl/certs/xxxxxx1.crt
nano /etc/mysql/mariadb.cnf
[mysqld]
ssl-ca = /usr/local/share/ca-certificates/localCA.crt
ssl-key = /etc/ssl/certs/xxxxx1.key
ssl-cert = /etc/ssl/certs/xxxxx1.crt
service mariadb restart
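
A sketch to confirm MariaDB actually picked up the TLS settings after the restart (standard MariaDB statements, unrelated to the gma error itself):

mysql -e "SHOW GLOBAL VARIABLES LIKE 'have_ssl';"   # should report YES
mysql -e "SHOW GLOBAL VARIABLES LIKE 'ssl%';"       # paths should match the files configured above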

Then on CT-MARIADB-GM and the CT-MARIADB-NODEX, I simply ran apt-get upgrade to keep up with new versions.

@arenner-git

I've just updated galera-manager to version 1.8.4 and wondered about the red dot next to the node icons. I'm not sure if I just never noticed it before or if that's new. Looking at the logs on one of the nodes, I found the same messages as you, @ketomagasaki.

I'm running 2 clusters (1x Dev, 1x Prod), both configured and managed via the same Galera Manager instance.
Debian 11 is used on all hosts; the DB engine is mariadb 10.6.

If I can help with any other information, just let me know!

@ServaboFidem

Apparently this is still broken. Joy.

@arenner-git

Apparently this is still broken. Joy.

yes, it indeed is

@RoyvanEmpel

This still seems to be a problem. Is there any progress on a fix? I just installed Ubuntu 22.04 on 4 servers (1 for GM and 3 for DBs) and hit the same error after setting up mariadb 10.11.

@byte
Contributor

byte commented Oct 25, 2024

Oct 25 23:50:30 ip-172-31-3-3 gma[1923]: time="2024-10-25T23:50:30Z" level=error msg="Error while serving: rpc error: code = Unavailable desc = connection error: desc = "error reading server preface: http2: frame too large"; sleeping for 1 second"

I see this on Ubuntu 22.04 on the host that GM deployed. It all ends up in /var/log/syslog - very verbose.
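
If the syslog noise is the immediate problem, one stopgap is to filter the agent's messages out of syslog (a sketch, assuming rsyslog and the program name gma seen in the log lines; it only hides the messages, it does not fix the connection error):

# /etc/rsyslog.d/30-gma-drop.conf
:programname, isequal, "gma" stop
# then: systemctl restart rsyslog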

@pkamps

pkamps commented Nov 7, 2024

Any workarounds available?

@llzzrrdd

any updates?

@planetfrontiers

Still nothing?
