Here is the database layout. I have a table of sparse sales over time, aggregated per day. If an item has 10 sales on 01-01-2015, there is an entry for that day; if it has 0 sales, there is no entry at all. Something like this:
|--------------------------------------|
| day_of_year | year | sales | item_id |
|--------------------------------------|
| 01          | 2015 | 20    | A1      |
| 01          | 2015 | 11    | A2      |
| 07          | 2015 | 09    | A1      |
| ...         | ...  | ...   | ...     |
|--------------------------------------|
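For reference, a minimal version of the table can be set up like this (the column types are my assumption; the real table may differ):

-- Assumed schema; adjust types to match the actual table.
CREATE TABLE myschema.entry_daily (
    day_of_year integer NOT NULL,  -- 1..366
    year        integer NOT NULL,
    sales       integer NOT NULL,
    item_id     text    NOT NULL
);

INSERT INTO myschema.entry_daily (day_of_year, year, sales, item_id)
VALUES (1, 2015, 20, 'A1'),
       (1, 2015, 11, 'A2'),
       (7, 2015,  9, 'A1');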
This is how I currently get the time series for a single item:
SELECT doy, max(sales)
FROM (
    SELECT day_of_year AS doy,
           sales
    FROM myschema.entry_daily
    WHERE item_id = 'theNameOfmyItem'
      AND year = 2015
      AND day_of_year < 150
    UNION
    SELECT doy,
           0 AS sales
    FROM generate_series(1, 149) AS doy
) AS t
GROUP BY doy
ORDER BY doy;
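(As an aside, the same dense series can be written without the UNION/GROUP BY trick, by outer-joining the day series to the data. This is only a sketch of an equivalent query, assuming one row per item and day, not necessarily a faster one:)

SELECT d.doy,
       COALESCE(e.sales, 0) AS sales  -- missing days become 0
FROM generate_series(1, 149) AS d(doy)
LEFT JOIN myschema.entry_daily AS e
       ON e.day_of_year = d.doy
      AND e.item_id = 'theNameOfmyItem'
      AND e.year = 2015
ORDER BY d.doy;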
I currently loop in R, issuing one query per item, and then aggregate the results into a data frame. But this is very slow. I would like a single query that aggregates all the data into the following form:
|----------------------------------------------|
| item_id | 01 | 02 | 03 | 04 | 05 | ... | 149 |
|----------------------------------------------|
| A1      | 10 | 00 | 00 | 05 | 12 | ... | 11  |
| A2      | 11 | 00 | 30 | 01 | 15 | ... | 09  |
| A3      | 20 | 00 | 00 | 05 | 17 | ... | 20  |
| ...     |    |    |    |    |    |     |     |
|----------------------------------------------|
Would this be possible? By the way, I am using a Postgres database.
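To clarify what I mean by "one query": I imagine a single query could at least return everything in long form (one row per item and day), by cross-joining the distinct items with the day series. Assuming the schema above, something like:

SELECT i.item_id,
       d.doy,
       COALESCE(e.sales, 0) AS sales  -- fill missing days with 0
FROM (SELECT DISTINCT item_id
      FROM myschema.entry_daily
      WHERE year = 2015) AS i
CROSS JOIN generate_series(1, 149) AS d(doy)  -- every item gets every day
LEFT JOIN myschema.entry_daily AS e
       ON e.item_id = i.item_id
      AND e.year = 2015
      AND e.day_of_year = d.doy
ORDER BY i.item_id, d.doy;

I suppose the final pivot to 149 columns could then happen either in R (e.g. with reshape2::dcast on this long result, so only one query total) or in SQL via the tablefunc extension's crosstab(), but I am not sure which is idiomatic here.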