Page access is sometimes good and sometimes bad #331

Closed
opened 2026-02-04 18:34:07 +03:00 by OVERLORD · 2 comments
Owner

Originally created by @litcc on GitHub (Jun 13, 2023).

## Error Description

The connection is intermittently flaky. From the login page, one moment the page loads normally and the next it spins for a long time. Sometimes the first login attempt returns an error and the second attempt succeeds. Even after a successful login, features behave inconsistently: after adding a card, refreshing the page shows the card is gone, and newly added users sometimes go missing as well. There is a hint in the logs, but I do not understand what I should do.

## Planka error

```log
Troubleshooting tips:
 -> Is your Postgresql configuration correct?  Maybe your `poolSize` configuration is set too high? e.g. If your Postgresql database only supports 20 concurrent connections, you should make sure you have your `poolSize` set as something < 20 (see http://stackoverflow.com/a/27387928/486547). The default `poolSize` is 10. To override default settings, specify the desired properties on the relevant Postgresql "connection" config object where the host/port/database/etc. are configured. If you're using Sails, this is generally located in `config/datastores.js`, or wherever your environment-specific database configuration is set.
 -> Maybe your `poolSize` configuration is set too high? e.g. If your Postgresql database only supports 20 concurrent connections, you should make sure you have your `poolSize` set as something < 20 (see http://stackoverflow.com/a/27387928/486547). The default `poolSize` is 10.
 -> Do you have multiple Sails instances sharing the same Postgresql database? Each Sails instance may use up to the configured `poolSize` # of connections. Assuming all of the Sails instances are just copies of one another (a reasonable best practice) we can calculate the actual # of Postgresql connections used (C) by multiplying the configured `poolSize` (P) by the number of Sails instances (N). If the actual number of connections (C) exceeds the total # of **AVAILABLE** connections to your Postgresql database (V), then you have problems.  If this applies to you, try reducing your `poolSize` configuration. A reasonable `poolSize` setting would be V/N.
 -> Are you using an SSL-enabled Postgresql host like Heroku? Make sure to set `ssl` to `true` (see http://stackoverflow.com/a/22177218/486547)
2023-06-13 00:01:10 [E] Sending 500 ("Server Error") response:
 Unexpected error from database adapter: `select` failed ("badConnection").  A connection either could not be obtained or there was an error using the connection.
Additional data:
{
  error: Error: getaddrinfo EAI_AGAIN planka-postgres
      at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {
    errno: -3001,
    code: 'EAI_AGAIN',
    syscall: 'getaddrinfo',
    hostname: 'planka-postgres'
  },
  meta: {
    adapter: 'sails-postgresql-redacted',
    url: 'postgresql://postgres@planka-postgres/planka',
    identity: 'default'
  }
}
Troubleshooting tips:
 -> Is your Postgresql configuration correct?  Maybe your `poolSize` configuration is set too high? e.g. If your Postgresql database only supports 20 concurrent connections, you should make sure you have your `poolSize` set as something < 20 (see http://stackoverflow.com/a/27387928/486547). The default `poolSize` is 10. To override default settings, specify the desired properties on the relevant Postgresql "connection" config object where the host/port/database/etc. are configured. If you're using Sails, this is generally located in `config/datastores.js`, or wherever your environment-specific database configuration is set.
 -> Maybe your `poolSize` configuration is set too high? e.g. If your Postgresql database only supports 20 concurrent connections, you should make sure you have your `poolSize` set as something < 20 (see http://stackoverflow.com/a/27387928/486547). The default `poolSize` is 10.
 -> Do you have multiple Sails instances sharing the same Postgresql database? Each Sails instance may use up to the configured `poolSize` # of connections. Assuming all of the Sails instances are just copies of one another (a reasonable best practice) we can calculate the actual # of Postgresql connections used (C) by multiplying the configured `poolSize` (P) by the number of Sails instances (N). If the actual number of connections (C) exceeds the total # of **AVAILABLE** connections to your Postgresql database (V), then you have problems.  If this applies to you, try reducing your `poolSize` configuration. A reasonable `poolSize` setting would be V/N.
 -> Are you using an SSL-enabled Postgresql host like Heroku? Make sure to set `ssl` to `true` (see http://stackoverflow.com/a/22177218/486547)
2023-06-13 00:01:38 [E] Sending 500 ("Server Error") response:
 Unexpected error from database adapter: `select` failed ("badConnection").  A connection either could not be obtained or there was an error using the connection.
Additional data:
{
  error: Error: getaddrinfo EAI_AGAIN planka-postgres
      at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {
    errno: -3001,
    code: 'EAI_AGAIN',
    syscall: 'getaddrinfo',
    hostname: 'planka-postgres'
  },
  meta: {
    adapter: 'sails-postgresql-redacted',
    url: 'postgresql://postgres@planka-postgres/planka',
    identity: 'default'
  }
}
```
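The pool-size tips in the log are generic Sails advice; the actual failure is `getaddrinfo EAI_AGAIN`, meaning the resolver temporarily failed to look up the hostname `planka-postgres` at all. A rough sketch of how one might probe cluster DNS for this setup, assuming the Deployment/Service names from the manifests below and that `getent` is available in the Planka image:

```shell
# EAI_AGAIN is a transient DNS lookup failure, not pool exhaustion.
# Resolve the Service name from inside the Planka pod:
kubectl -n work-local exec deploy/planka -- getent hosts planka-postgres

# Compare against the ClusterIP Kubernetes has assigned to the Service:
kubectl -n work-local get svc planka-postgres -o jsonpath='{.spec.clusterIP}'

# If lookups fail only intermittently, check the cluster DNS pods themselves:
kubectl -n kube-system get pods -l k8s-app=kube-dns
```

If the first command fails while the second shows a valid ClusterIP, the problem is in cluster DNS (CoreDNS or the node's DNS path), not in Planka or Postgres.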

## Run config (YAML)

```yml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: planka
  namespace: work-local
  labels:
    k8s.kuboard.cn/name: planka
  annotations: {}
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s.kuboard.cn/name: planka
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s.kuboard.cn/name: planka
    spec:
      volumes:
        - name: planka-user-avatars
          hostPath:
            path: /data/planka/user-avatars
            type: DirectoryOrCreate
        - name: volume-sksf4
          hostPath:
            path: /data/planka/project-background-images
            type: DirectoryOrCreate
        - name: volume-mfc5s
          hostPath:
            path: /data/planka/attachments
            type: DirectoryOrCreate
      containers:
        - name: planka
          image: 'ghcr.dockerproxy.com/ghcr.io/plankanban/planka:latest'
          args:
            - bash
            - '-c'
            - |-
              for i in `seq 1 30`; do
                  ./start.sh &&
                  s=$? && break || s=$?;
                  echo "Tried $i times. Waiting 5 seconds...";
                  sleep 5;
                done; (exit $s)
          ports:
            - name: planka-port
              containerPort: 1337
              protocol: TCP
          env:
            - name: BASE_URL
              value: 'http://10.0.0.172:31337'
            - name: DATABASE_URL
              value: 'postgresql://postgres@planka-postgres/planka'
            - name: SECRET_KEY
              value: notsecretkey
            - name: TRUST_PROXY
              value: '1'
            - name: NODE_ENV
              value: production
          resources: {}
          volumeMounts:
            - name: planka-user-avatars
              mountPath: /app/public/user-avatars
            - name: volume-sksf4
              mountPath: /app/public/project-background-images
            - name: volume-mfc5s
              mountPath: /app/private/attachments
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      nodeName: edge-node-work-e88758b4
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  minReadySeconds: 1
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 20

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: planka-postgres
  namespace: work-local
  labels:
    k8s.kuboard.cn/name: planka-postgres
  annotations: {}
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s.kuboard.cn/name: planka-postgres
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s.kuboard.cn/name: planka-postgres
      annotations:
        kubectl.kubernetes.io/restartedAt: '2023-06-12T18:55:11+08:00'
    spec:
      volumes:
        - name: db-data
          hostPath:
            path: /data/planka-postgres
            type: DirectoryOrCreate
      containers:
        - name: planka-postgres
          image: 'postgres:14-alpine'
          ports:
            - name: planka-db-port
              containerPort: 5432
              protocol: TCP
          env:
            - name: POSTGRES_DB
              value: planka
            - name: POSTGRES_HOST_AUTH_METHOD
              value: trust
          resources: {}
          volumeMounts:
            - name: db-data
              mountPath: /var/lib/postgresql/data
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      nodeName: edge-node-work-e88758b4
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  minReadySeconds: 1
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 20

---
kind: Service
apiVersion: v1
metadata:
  name: planka
  namespace: work-local
  labels:
    k8s.kuboard.cn/name: planka
spec:
  ports:
    - name: fzkamw
      protocol: TCP
      port: 1337
      targetPort: 1337
      nodePort: 31337
  selector:
    k8s.kuboard.cn/name: planka
  type: NodePort
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster

---
kind: Service
apiVersion: v1
metadata:
  name: planka-postgres
  namespace: work-local
  labels:
    k8s.kuboard.cn/name: planka-postgres
spec:
  ports:
    - name: rrexfx
      protocol: TCP
      port: 5432
      targetPort: 5432
  selector:
    k8s.kuboard.cn/name: planka-postgres
  type: ClusterIP
  sessionAffinity: None
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
```


@litcc commented on GitHub (Jun 13, 2023):

The problem seems to be DNS-related: when I specify the Service's IP address directly instead of the hostname, the problem is solved.
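For reference, the workaround amounts to replacing the bare hostname in `DATABASE_URL`. A sketch of an alternative (assuming the Service remains named `planka-postgres` in namespace `work-local`): using the fully-qualified Service name avoids most DNS search-list expansion and, unlike a hard-coded ClusterIP, keeps working if the Service is ever re-created:

```yaml
env:
  - name: DATABASE_URL
    # The bare name "planka-postgres" relies on the pod's DNS search list;
    # the fully-qualified Service name resolves with fewer lookups and
    # does not break when the Service's ClusterIP changes.
    value: 'postgresql://postgres@planka-postgres.work-local.svc.cluster.local/planka'
```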

@litcc commented on GitHub (Jun 13, 2023):

Initial conclusion: this is a cluster network problem, so this issue was filed in the wrong place.


Reference: starred/planka#331