I am trying to self-host Piped and followed all the instructions in the docs for the AIO nginx script. I set up the subdomains as needed and added SSL certificates with certbot for nginx. When I go to yt.mydomain.tld, I get a 400 error page from nginx. The only change I made to the generated docker-compose file was the port, from 8080 to 3001, because port 8080 is already in use by another service. I also changed the listening port in the config.properties file to 3001.
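For reference, this is roughly what I changed (only the lines shown; the service layout is from the generated compose file, and the config key name is from memory, so treat it as approximate):

    # docker-compose.yml (generated by the AIO script)
    nginx:
      image: nginx:mainline-alpine
      restart: unless-stopped
      ports:
        - "3001:80"   # was "8080:80", but 8080 is already taken on the host

    # config/config.properties
    PORT: 3001        # was 8080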
Here are the logs for piped-backend:
Oct 23, 2023 1:44:54 AM liquibase.database
INFO: Set default schema name to public
Oct 23, 2023 1:44:54 AM liquibase.changelog
INFO: Reading from public.databasechangelog
Database is up to date, no changesets to execute
Oct 23, 2023 1:44:54 AM liquibase.changelog
INFO: Reading from public.databasechangelog
Oct 23, 2023 1:44:54 AM liquibase.util
INFO: UPDATE SUMMARY
Oct 23, 2023 1:44:54 AM liquibase.util
INFO: Run: 0
Oct 23, 2023 1:44:54 AM liquibase.util
INFO: Previously run: 3
Oct 23, 2023 1:44:54 AM liquibase.util
INFO: Filtered out: 0
Oct 23, 2023 1:44:54 AM liquibase.util
INFO: -------------------------------
Oct 23, 2023 1:44:54 AM liquibase.util
INFO: Total change sets: 3
UPDATE SUMMARY
Run: 0
Previously run: 3
Filtered out: 0
-------------------------------
Total change sets: 3
Oct 23, 2023 1:44:54 AM liquibase.util
INFO: Update summary generated
Oct 23, 2023 1:44:54 AM liquibase.lockservice
INFO: Successfully released change log lock
Oct 23, 2023 1:44:54 AM liquibase.command
INFO: Command execution complete
ThrottlingCache: 0 entries
SLF4J: No SLF4J providers were found.
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See https://www.slf4j.org/codes.html#noProviders for further details.
Oct 23, 2023 1:44:54 AM org.hibernate.Version logVersion
INFO: HHH000412: Hibernate ORM core version [WORKING]
Oct 23, 2023 1:44:54 AM org.hibernate.cache.internal.RegionFactoryInitiator initiateService
INFO: HHH000026: Second-level cache disabled
Oct 23, 2023 1:44:54 AM org.hibernate.engine.jdbc.connections.internal.ConnectionProviderInitiator initiateService
INFO: HHH000130: Instantiating explicit connection provider: org.hibernate.hikaricp.internal.HikariCPConnectionProvider
Oct 23, 2023 1:44:54 AM org.hibernate.engine.jdbc.dialect.internal.DialectFactoryImpl constructDialect
WARN: HHH90000025: PostgreSQLDialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
Logged in as user: null
Room ID: [possibly-private]:matrix.org
Filter ID: null
Oct 23, 2023 1:44:55 AM org.hibernate.engine.transaction.jta.platform.internal.JtaPlatformInitiator initiateService
INFO: HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
Database connection is ready!
Cleanup: Removed 0 old videos
PubSub: queue size - 0 channels
NGINX container logs:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/10/23 01:44:54 [notice] 1#1: using the "epoll" event method
2023/10/23 01:44:54 [notice] 1#1: nginx/1.25.2
2023/10/23 01:44:54 [notice] 1#1: built by gcc 12.2.1 20220924 (Alpine 12.2.1_git20220924-r10)
2023/10/23 01:44:54 [notice] 1#1: OS: Linux 5.4.0-163-generic
2023/10/23 01:44:54 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/10/23 01:44:54 [notice] 1#1: start worker processes
2023/10/23 01:44:54 [notice] 1#1: start worker process 30
2023/10/23 01:44:54 [notice] 1#1: start worker process 31
2023/10/23 01:44:54 [notice] 1#1: start worker process 32
2023/10/23 01:44:54 [notice] 1#1: start worker process 33
2023/10/23 01:44:54 [notice] 1#1: start worker process 34
2023/10/23 01:44:54 [notice] 1#1: start worker process 38
2023/10/23 01:44:54 [notice] 1#1: start worker process 48
2023/10/23 01:44:54 [notice] 1#1: start worker process 66
2023/10/23 01:44:54 [notice] 1#1: start worker process 87
2023/10/23 01:44:54 [notice] 1#1: start worker process 106
2023/10/23 01:44:54 [notice] 1#1: start worker process 126
2023/10/23 01:44:54 [notice] 1#1: start worker process 142
2023/10/23 01:44:54 [notice] 1#1: start cache manager process 158
2023/10/23 01:44:54 [notice] 1#1: start cache loader process 178
2023/10/23 01:45:54 [notice] 178#178: http file cache: /tmp/pipedapi_cache 0.000M, bsize: 4096
2023/10/23 01:45:54 [notice] 1#1: signal 17 (SIGCHLD) received from 178
2023/10/23 01:45:54 [notice] 1#1: cache loader process 178 exited with code 0
2023/10/23 01:45:54 [notice] 1#1: signal 29 (SIGIO) received
172.27.0.1 - - [23/Oct/2023:01:46:36 +0000] "GET / HTTP/1.1" 400 157 "-" "-" "84.252.113.1, 84.252.113.1"
You changed too many options without knowing what they do, and now it's completely broken.
Wipe it and start fresh again.
    nginx:
      image: nginx:mainline-alpine
      restart: unless-stopped
      ports:
        - "3001:80"
      [...]
      labels:
        - "traefik.http.services.piped.loadbalancer.server.port=8080"
As someone already pointed out, this won't work. The Traefik label needs to point at the internal container port of the web service. The "3001:80" mapping could be removed completely when you're using a reverse proxy with Docker networking; it only applies to the Docker host.
Speaking of networking, where is your Traefik actually running? I don't see it listed in the compose file, and if it's running as another container on the same host, you need to make the web service (nginx) a member of Traefik's Docker network, which isn't mentioned in the compose file either. So how would Traefik connect to nginx? If Traefik is not running on the same Docker host, then those labels are useless anyway and do nothing.
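To make that concrete, something along these lines would be needed, assuming Traefik runs as a separate container and you attach both to a shared external network (I'm calling it "traefik" here; the router and entrypoint names are just examples, not from your setup):

    nginx:
      image: nginx:mainline-alpine
      restart: unless-stopped
      # no host port mapping needed, Traefik reaches nginx over the shared network
      networks:
        - traefik
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.piped.rule=Host(`yt.mydomain.tld`)"
        - "traefik.http.routers.piped.entrypoints=websecure"
        - "traefik.http.routers.piped.tls=true"
        # the port nginx listens on INSIDE the container, not the host mapping
        - "traefik.http.services.piped.loadbalancer.server.port=80"

    networks:
      traefik:
        external: true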
I can’t use port 8080 because it is in use by another service.
Nothing needs to use port 8080 at all. The only thing that is needed is Traefik listening on 80/443 and using Docker networking to reach the web service that should be proxied (nginx). Then there are no port conflicts at all.
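Roughly like this on the Traefik side, again assuming the shared "traefik" network from above (the image tag and entrypoint names are examples, adjust to your own setup):

    traefik:
      image: traefik:v2.10
      restart: unless-stopped
      ports:
        - "80:80"     # the only ports that need to be published on the host
        - "443:443"
      command:
        - "--providers.docker=true"
        - "--providers.docker.exposedbydefault=false"
        - "--entrypoints.web.address=:80"
        - "--entrypoints.websecure.address=:443"
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock:ro
      networks:
        - traefik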
I also changed the listening port in the config.properties file to 3001.
Why? If you change the listening port for the Piped frontend, then none of the other ports will match anymore (nginx). Leave it as it is.
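If I remember the generated config correctly, that line should simply stay at its default so it still matches what the bundled nginx config proxies to:

    # config/config.properties
    PORT: 8080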
TL;DR: stop changing things when you don't know how they work.
Refer to the Piped documentation and their GitHub page to ask for help, to the Traefik and nginx documentation, and to /r/Docker for support with Docker-specific issues. This here is not a techsupport-for-every-possible-software subreddit.
Thanks for your reply. I reset everything and started from scratch. This time I only changed the database password and nothing else. I am still getting the same 400 error.
I reset everything and started from scratch.
So I started again, and this time I just left everything at its defaults.
I had an "include proxy_params;" directive in my nginx config when it shouldn't have been there.
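For anyone who finds this later, a rough sketch of the kind of host nginx block this was about; the domain is from my setup, the upstream port is whatever the compose stack's nginx is published on (3001 in my original post), and the rest is a generic example, not the exact config from the docs:

    server {
        listen 443 ssl;
        server_name yt.mydomain.tld;

        # certbot-managed certificates
        ssl_certificate     /etc/letsencrypt/live/yt.mydomain.tld/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/yt.mydomain.tld/privkey.pem;

        location / {
            # the extra "include proxy_params;" used to sit here;
            # removing it is what fixed the 400 for me
            proxy_pass http://127.0.0.1:3001;
            proxy_set_header Host $host;
        }
    }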
So you messed up at least twice by not really starting fresh each time.
Well, at least it works now; maybe you learned a bit from it.
This here is not a techsupport-for-every-possible-software subreddit.
Also, config/config.properties needs changing.
Yes, I changed it as well.
First thing I see: you have the port forwarded 3001 to 80, yet Traefik is looking at 8080.
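In other words, with the "3001:80" mapping the label would have to point at the container-side port, something like:

    - "traefik.http.services.piped.loadbalancer.server.port=80"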