Wednesday, June 29, 2016

SELECT * from a simple table with 50k records seems to take 40 seconds

Here's my table:

CREATE TABLE public.logs (
  logid bigint NOT NULL DEFAULT nextval('applog_logid_seq'::regclass),
  applicationname character varying(50),
  loglevel character varying(10),
  logmessage character varying(500),
  stacktrace character varying(4096),
  occurredon timestamp without time zone,
  loggedon timestamp without time zone,
  username character varying(50),
  groupname character varying(50),
  useragent character varying(512),
  CONSTRAINT applog_pkey PRIMARY KEY (logid)
);
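Worth noting: the row is fairly wide (several large varchar columns), so 50,000 rows can be a non-trivial amount of data for a client to fetch and render. A rough sanity check of that, using only PostgreSQL's built-in size functions, might look like this (a sketch, not part of the original question):

-- Rough check: how many rows there are and how much on-disk space
-- the table occupies, i.e. roughly how much data a SELECT * has to
-- pull across the wire and into the client's grid.
SELECT count(*)                                              AS row_count,
       pg_size_pretty(pg_total_relation_size('public.logs')) AS total_size
FROM public.logs;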

When I run SELECT *... on it, it takes 40 seconds to return 50000 rows on my local machine. I have the same table on a local install of SQL Server, and that takes less than a second to return the same amount of data.

I'm in the middle of an evaluation of PostgreSQL for our new stack, and this is very concerning to me. What am I doing wrong, and why is PostgreSQL so slow?

Edit:

Here's what I get from EXPLAIN (BUFFERS, ANALYZE, TIMING OFF) SELECT * FROM public.logs:

[Screenshot of the EXPLAIN (BUFFERS, ANALYZE, TIMING OFF) output, showing a sequential scan with an execution time of about 6 ms]

So it looks like the server executes the query in about 6 ms. I guess that means all the overhead is in pgAdmin III, but how is SSMS able to do this so much faster?
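If the overhead really is client-side rendering, one way to check (a sketch, assuming psql's \timing and \o meta-commands, run from a psql session connected to the same database) would be:

-- Report elapsed time on the client, but discard the formatted rows
-- so the terminal doesn't have to render 50,000 lines of output.
\timing on
\o /dev/null
SELECT * FROM public.logs;
\o
\timing off

If the reported time is close to the EXPLAIN figure plus transfer time, the 40 seconds is presumably pgAdmin III building its data grid rather than PostgreSQL executing the query.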
