How to take average of averages in SSRS

I have to take average of averages in my report.
Previously, the quarterly average was calculated as the sum of all values in the quarter divided by the number of days in that quarter. Now I need the quarterly average to be the average of the monthly averages.
How shall I do it?

I don't think this is possible in SSRS. The problem you have is that although you can specify a scope for an aggregation, you cannot specify two scopes.
What you would need to tell SSRS to do is something like this:
-- THIS WILL NOT WORK
=AVG(AVG(Fields!myField.Value, "MonthRowGroupName"), "ToCurrencyColumnGroupName")
As I said, SSRS does not allow this, so I typically just do this kind of thing in the dataset query, as it's quite simple.
Below is some sample data which should roughly match what you have:
DROP TABLE IF EXISTS #t
CREATE TABLE #t (Q int, M int, D int, CurF varchar(10), CurT varchar(10), XR float)
INSERT INTO #t VALUES (1,1,1, 'USD', 'EUR', 1.2), (1,1,2, 'USD', 'EUR', 1.25), (1,1,3, 'USD', 'EUR', 1.28), (1,1,1, 'USD', 'GBP', 1.1), (1,1,2, 'USD', 'GBP', 1.15), (1,1,3, 'USD', 'GBP', 1.18),
(1,2,1, 'USD', 'EUR', 1.3), (1,3,2, 'USD', 'EUR', 1.35), (1,2,3, 'USD', 'EUR', 1.38), (1,2,1, 'USD', 'GBP', 1.4), (1,2,2, 'USD', 'GBP', 1.45), (1,2,3, 'USD', 'GBP', 1.48),
(1,3,1, 'USD', 'EUR', 1), (1,3,2, 'USD', 'EUR', 1), (1,3,3, 'USD', 'EUR', 1), (1,3,1, 'USD', 'GBP', 1), (1,3,2, 'USD', 'GBP', 1), (1,3,3, 'USD', 'GBP', 1),
(2,1,1, 'USD', 'EUR', 1.3), (2,1,2, 'USD', 'EUR', 1.35), (2,1,3, 'USD', 'EUR', 1.38), (2,1,1, 'USD', 'GBP', 1.4), (2,1,2, 'USD', 'GBP', 1.45), (2,1,3, 'USD', 'GBP', 1.48),
(2,2,1, 'USD', 'EUR', 1.5), (2,3,2, 'USD', 'EUR', 1.55), (2,2,3, 'USD', 'EUR', 1.58), (2,2,1, 'USD', 'GBP', 1.5), (2,2,2, 'USD', 'GBP', 1.55), (2,2,3, 'USD', 'GBP', 1.58),
(2,3,1, 'USD', 'EUR', 1.7), (2,3,2, 'USD', 'EUR', 1.75), (2,3,3, 'USD', 'EUR', 1.8), (2,3,1, 'USD', 'GBP', 1.85), (2,3,2, 'USD', 'GBP', 1.9), (2,3,3, 'USD', 'GBP', 1.95),
(1,1,1, 'PES', 'EUR', 21.2), (1,1,2, 'PES', 'EUR', 21.25), (1,1,3, 'PES', 'EUR', 21.28), (1,1,1, 'PES', 'GBP', 21.1), (1,1,2, 'PES', 'GBP', 21.15), (1,1,3, 'PES', 'GBP', 21.18),
(1,2,1, 'PES', 'EUR', 21.3), (1,3,2, 'PES', 'EUR', 21.35), (1,2,3, 'PES', 'EUR', 21.38), (1,2,1, 'PES', 'GBP', 21.4), (1,2,2, 'PES', 'GBP', 21.45), (1,2,3, 'PES', 'GBP', 21.48),
(1,3,1, 'PES', 'EUR', 21), (1,3,2, 'PES', 'EUR', 21), (1,3,3, 'PES', 'EUR', 21), (1,3,1, 'PES', 'GBP', 21), (1,3,2, 'PES', 'GBP', 21), (1,3,3, 'PES', 'GBP', 21),
(2,1,1, 'PES', 'EUR', 21.3), (2,1,2, 'PES', 'EUR', 21.35), (2,1,3, 'PES', 'EUR', 21.38), (2,1,1, 'PES', 'GBP', 21.4), (2,1,2, 'PES', 'GBP', 21.45), (2,1,3, 'PES', 'GBP', 21.48),
(2,2,1, 'PES', 'EUR', 21.5), (2,3,2, 'PES', 'EUR', 21.55), (2,2,3, 'PES', 'EUR', 21.58), (2,2,1, 'PES', 'GBP', 21.5), (2,2,2, 'PES', 'GBP', 21.55), (2,2,3, 'PES', 'GBP', 21.58),
(2,3,1, 'PES', 'EUR', 21.7), (2,3,2, 'PES', 'EUR', 21.75), (2,3,3, 'PES', 'EUR', 21.8), (2,3,1, 'PES', 'GBP', 21.85), (2,3,2, 'PES', 'GBP', 21.9), (2,3,3, 'PES', 'GBP', 21.95)
SELECT
    *,
    QtrAvgOfMonthAvg = AVG(MonthAvg) OVER (PARTITION BY Q, CurF, CurT)
FROM (
    SELECT DISTINCT
        Q, M, CurF, CurT,
        MonthAvg = AVG(XR) OVER (PARTITION BY Q, M, CurF, CurT)
    FROM #t
) MAvg
The inner query (aliased MAvg) gets the monthly averages.
The outer query then gets the quarterly average. Because this quarterly average is repeated on each row, in SSRS we can use AVG, FIRST, MAX, or MIN to get a single instance of the number.
Here's the basic report design
Note that the SUM part of [SUM(MonthAvg)] does not matter; we could have used AVG, MIN, MAX, etc., as there is only a single number present at this scope (row and column).
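For example, the quarter-level cell could contain an expression like the one below (the field name assumes the dataset query above; any of the aggregates just mentioned would return the same value):
=Max(Fields!QtrAvgOfMonthAvg.Value)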
and this is the result

Related

in Python 3, how can I slice JSON data where objects all start with same name?

I have a JSON string that returns device info, and if devices are found, they are listed as device0, device1, device2, etc. In the simple code below, how can I discover all the devices in the JSON and then print the info below for each one? I currently look up each device statically, and I want the discovery to be dynamic, printing the results for each device found.
import requests

r1 = requests.get(url=url_api, params=PARAMS)
devicedata = r1.json()
if 'device0' in devicedata:
    print('')
    device0Name = devicedata['device0']['device_name']
    print(device0Name)
    print('Temp: {}'.format(devicedata['device0']['obs'][0]['ambient_temp']))
    print('Probe Temp: {}'.format(devicedata['device0']['obs'][0]['probe_temp']))
    print('Humidity: {}%'.format(devicedata['device0']['obs'][0]['humidity']))
    print('')
# JSON info looks like this...
{'device0': {'success': True, 'device_type': 'TX60', 'obs': [{'device_id': '1111', 'device_type': 'TX60', 'u_timestamp': '1580361017', 'ambient_temp': '45.7', 'probe_temp': '45.5', 'humidity': '82', 'linkquality': '100', 'lowbattery': '0', 'success': '9', 's_interval': '99', 'timestamp': '1/29/2020 11:10 PM', 'utctime': 1580361017}], 'alerts': {'miss': {'id': '520831', 'alert_type': 'miss', 's_id': '1111', 'max': '-100', 'min': '30', 'wet': '0', 'alert_id': '1', 'phone': 'yes', 'email': '', 'state': None}, 'batt': {'id': '520832', 'alert_type': 'batt', 's_id': '1111', 'max': '-100', 'min': '-100', 'wet': '0', 'alert_id': '1', 'phone': 'yes', 'email': '', 'state': None}}, 'ispws': 0, 'unit': {'temp': '°F', 'temp2': '°F', 'rh': '%'}, 'device_id': '1111', 'expired': '0', 'interval': '30', 'reg_date': '2020-01-17 22:06:48', 'create_date': 1579298808, 'device_name': 'Back Yard', 'assocGateway': '1', 'problem': False}, 'device1': {'success': True, 'device_type': 'TX60', 'obs': [{'device_id': '2222', 'device_type': 'TX60', 'u_timestamp': '1580360303', 'ambient_temp': '63.6', 'probe_temp': 'N/C', 'humidity': '64', 'linkquality': '100', 'lowbattery': '0', 'success': '9', 's_interval': '99', 'timestamp': '1/29/2020 10:58 PM', 'utctime': 1580360303}], 'alerts': {'miss': {'id': '520220', 'alert_type': 'miss', 's_id': '2222', 'max': '-100', 'min': '30', 'wet': '0', 'alert_id': '1', 'phone': 'yes', 'email': '', 'state': None}, 'batt': {'id': '520221', 'alert_type': 'batt', 's_id': '2222', 'max': '-100', 'min': '-100', 'wet': '0', 'alert_id': '1', 'phone': 'yes', 'email': '', 'state': None}}, 'ispws': 0, 'unit': {'temp': '°F', 'temp2': '°F', 'rh': '%'}, 'device_id': '3333', 'expired': '1', 'interval': '30', 'reg_date': '2016-03-19 01:45:04', 'create_date': 1500868369, 'device_name': 'Crawl Space', 'assocGateway': '1', 'problem': False}, 'device2': {'success': True, 'device_type': 'TX60', 'obs': [{'device_id': '3333', 'device_type': 'TX60', 'u_timestamp': '1580360195', 'ambient_temp': '70.2', 'probe_temp': 'N/C', 'humidity': '48', 'linkquality': '100', 'lowbattery': '0', 'success': '9', 's_interval': '99', 'timestamp': '1/29/2020 10:56 PM', 'utctime': 1580360195}], 'alerts': None, 'ispws': 0, 'unit': {'temp': '°F', 'temp2': '°F', 'rh': '%'}, 'device_id': '3333', 'expired': '0', 'interval': '15', 'reg_date': '2020-01-30 04:34:00', 'create_date': 1580358840, 'device_name': 'Basement', 'assocGateway': '2', 'problem': False}, 'tz': 'America/Chicago'}
The output for each device looks like this:
Back Yard
Temp: 50.9
Probe Temp: 51.2
Humidity: 92%
Crawl Space
Temp: 65.4
Probe Temp: N/C
Humidity: 55%
Basement
Temp: 70
Probe Temp: N/C
Humidity: 48%
Found it.
for devKey in devicedata.keys():
    # Only process the 'deviceN' entries, skipping other top-level keys like 'tz'.
    if "device" in devKey:
        dev = devicedata[devKey]
        name = dev["device_name"]
        obs = dev["obs"][0]
        temp = obs["ambient_temp"]
        probeTemp = obs["probe_temp"]
        humidity = obs["humidity"]
        print(name)
        print('Temp: {}'.format(temp))
        print('Probe Temp: {}'.format(probeTemp))
        print('Humidity: {}%'.format(humidity))
        print('')
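Note that the substring test `"device" in devKey` would also match any other key that merely contains the word "device". A slightly stricter sketch, assuming the same devicedata structure, matches only keys of the form deviceN and processes them in numeric order:
import re

# Keep only keys like 'device0', 'device12', ... and sort them numerically,
# so device10 comes after device2 rather than after device1.
device_keys = sorted(
    (k for k in devicedata if re.fullmatch(r'device\d+', k)),
    key=lambda k: int(k[len('device'):])
)
for devKey in device_keys:
    dev = devicedata[devKey]
    obs = dev['obs'][0]
    print(dev['device_name'])
    print('Temp: {}'.format(obs['ambient_temp']))
    print('Probe Temp: {}'.format(obs['probe_temp']))
    print('Humidity: {}%'.format(obs['humidity']))
    print('')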

List of languages in JSON format (language as a key and code as a value) [closed]

I would just like to get a list of languages in the following JSON format (language as a key and code as a value):
{
  "english": "en",
  "german": "de",
  "greek": "el"
}
Thanks.
{'Abkhaz': 'ab',
'Afar': 'aa',
'Afrikaans': 'af',
'Akan': 'ak',
'Albanian': 'sq',
'Amharic': 'am',
'Arabic': 'ar',
'Aragonese': 'an',
'Armenian': 'hy',
'Assamese': 'as',
'Avaric': 'av',
'Avestan': 'ae',
'Aymara': 'ay',
'Azerbaijani': 'az',
'Bambara': 'bm',
'Bashkir': 'ba',
'Basque': 'eu',
'Belarusian': 'be',
'Bengali': 'bn',
'Bihari': 'bh',
'Bislama': 'bi',
'Bosnian': 'bs',
'Breton': 'br',
'Bulgarian': 'bg',
'Burmese': 'my',
'Catalan; Valencian': 'ca',
'Chamorro': 'ch',
'Chechen': 'ce',
'Chichewa; Chewa; Nyanja': 'ny',
'Chinese': 'zh',
'Chuvash': 'cv',
'Cornish': 'kw',
'Corsican': 'co',
'Cree': 'cr',
'Croatian': 'hr',
'Czech': 'cs',
'Danish': 'da',
'Divehi; Dhivehi; Maldivian;': 'dv',
'Dutch': 'nl',
'English': 'en',
'Esperanto': 'eo',
'Estonian': 'et',
'Ewe': 'ee',
'Faroese': 'fo',
'Fijian': 'fj',
'Finnish': 'fi',
'French': 'fr',
'Fula; Fulah; Pulaar; Pular': 'ff',
'Galician': 'gl',
'Georgian': 'ka',
'German': 'de',
'Greek, Modern': 'el',
'Guaraní': 'gn',
'Gujarati': 'gu',
'Haitian; Haitian Creole': 'ht',
'Hausa': 'ha',
'Hebrew (modern)': 'he',
'Herero': 'hz',
'Hindi': 'hi',
'Hiri Motu': 'ho',
'Hungarian': 'hu',
'Interlingua': 'ia',
'Indonesian': 'id',
'Interlingue': 'ie',
'Irish': 'ga',
'Igbo': 'ig',
'Inupiaq': 'ik',
'Ido': 'io',
'Icelandic': 'is',
'Italian': 'it',
'Inuktitut': 'iu',
'Japanese': 'ja',
'Javanese': 'jv',
'Kalaallisut, Greenlandic': 'kl',
'Kannada': 'kn',
'Kanuri': 'kr',
'Kashmiri': 'ks',
'Kazakh': 'kk',
'Khmer': 'km',
'Kikuyu, Gikuyu': 'ki',
'Kinyarwanda': 'rw',
'Kirghiz, Kyrgyz': 'ky',
'Komi': 'kv',
'Kongo': 'kg',
'Korean': 'ko',
'Kurdish': 'ku',
'Kwanyama, Kuanyama': 'kj',
'Latin': 'la',
'Luxembourgish, Letzeburgesch': 'lb',
'Luganda': 'lg',
'Limburgish, Limburgan, Limburger': 'li',
'Lingala': 'ln',
'Lao': 'lo',
'Lithuanian': 'lt',
'Luba-Katanga': 'lu',
'Latvian': 'lv',
'Manx': 'gv',
'Macedonian': 'mk',
'Malagasy': 'mg',
'Malay': 'ms',
'Malayalam': 'ml',
'Maltese': 'mt',
'Māori': 'mi',
'Marathi (Marāṭhī)': 'mr',
'Marshallese': 'mh',
'Mongolian': 'mn',
'Nauru': 'na',
'Navajo, Navaho': 'nv',
'Norwegian Bokmål': 'nb',
'North Ndebele': 'nd',
'Nepali': 'ne',
'Ndonga': 'ng',
'Norwegian Nynorsk': 'nn',
'Norwegian': 'no',
'Nuosu': 'ii',
'South Ndebele': 'nr',
'Occitan': 'oc',
'Ojibwe, Ojibwa': 'oj',
'Old Church Slavonic, Church Slavic, Church Slavonic, Old Bulgarian, Old Slavonic': 'cu',
'Oromo': 'om',
'Oriya': 'or',
'Ossetian, Ossetic': 'os',
'Panjabi, Punjabi': 'pa',
'Pāli': 'pi',
'Persian': 'fa',
'Polish': 'pl',
'Pashto, Pushto': 'ps',
'Portuguese': 'pt',
'Quechua': 'qu',
'Romansh': 'rm',
'Kirundi': 'rn',
'Romanian, Moldavian, Moldovan': 'ro',
'Russian': 'ru',
'Sanskrit (Saṁskṛta)': 'sa',
'Sardinian': 'sc',
'Sindhi': 'sd',
'Northern Sami': 'se',
'Samoan': 'sm',
'Sango': 'sg',
'Serbian': 'sr',
'Scottish Gaelic; Gaelic': 'gd',
'Shona': 'sn',
'Sinhala, Sinhalese': 'si',
'Slovak': 'sk',
'Slovene': 'sl',
'Somali': 'so',
'Southern Sotho': 'st',
'Spanish; Castilian': 'es',
'Sundanese': 'su',
'Swahili': 'sw',
'Swati': 'ss',
'Swedish': 'sv',
'Tamil': 'ta',
'Telugu': 'te',
'Tajik': 'tg',
'Thai': 'th',
'Tigrinya': 'ti',
'Tibetan Standard, Tibetan, Central': 'bo',
'Turkmen': 'tk',
'Tagalog': 'tl',
'Tswana': 'tn',
'Tonga (Tonga Islands)': 'to',
'Turkish': 'tr',
'Tsonga': 'ts',
'Tatar': 'tt',
'Twi': 'tw',
'Tahitian': 'ty',
'Uighur, Uyghur': 'ug',
'Ukrainian': 'uk',
'Urdu': 'ur',
'Uzbek': 'uz',
'Venda': 've',
'Vietnamese': 'vi',
'Volapük': 'vo',
'Walloon': 'wa',
'Welsh': 'cy',
'Wolof': 'wo',
'Western Frisian': 'fy',
'Xhosa': 'xh',
'Yiddish': 'yi',
'Yoruba': 'yo',
'Zhuang, Chuang': 'za'}
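If you need the mapping in exactly the lowercase-key JSON shape from the question, a minimal Python sketch (assuming the dictionary above is bound to a variable named languages) would be:
import json

# `languages` is assumed to be the name -> code mapping shown above.
languages_json = {name.lower(): code for name, code in languages.items()}
print(json.dumps(languages_json, indent=2, ensure_ascii=False, sort_keys=True))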

Get previous month and this month sums in one query

I am trying to merge two queries into one.
My data:
CREATE TABLE `przychody` (
`id_przychodu` bigint(20) NOT NULL,
`id_rejonu` bigint(20) NOT NULL,
`fk_kontrahent` bigint(20) NOT NULL,
`dodal` bigint(20) NOT NULL,
`wartosc` decimal(10,2) NOT NULL,
`netto` decimal(10,2) NOT NULL,
`numer` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`z_dnia` date NOT NULL,
`dodano` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
`uwagi` varchar(250) COLLATE utf8_unicode_ci NOT NULL,
`vat_lacznie` decimal(11,2) NOT NULL,
`sprzedano` date NOT NULL,
`termin_platnosci` date NOT NULL,
`ilosc_dni` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
INSERT INTO `przychody` (`id_przychodu`, `id_rejonu`, `fk_kontrahent`, `dodal`, `wartosc`, `netto`, `numer`, `z_dnia`, `dodano`, `uwagi`, `vat_lacznie`, `sprzedano`, `termin_platnosci`, `ilosc_dni`) VALUES
(48, 1, 189, 3, '172.20', '140.00', '1/KOM/10/17', '2017-10-03', '2017-10-03 16:13:21', '', '32.20', '2017-10-03', '2017-11-02', 30),
(49, 1, 189, 3, '422.44', '422.44', '2/KOM/10/17', '2017-10-03', '2017-10-03 16:15:35', 'M', '0.00', '2017-10-03', '2017-11-02', 30),
(50, 3, 216, 3, '543.50', '441.87', '22/KOM/09/17', '2017-09-29', '2017-10-04 13:02:23', '', '101.63', '2017-09-29', '2017-10-18', 14),
(51, 1, 4, 3, '625.00', '625.00', '3/KOM/10/17', '2017-10-09', '2017-10-09 16:38:27', 'D 2', '0.00', '2017-10-09', '2017-11-08', 30),
(52, 3, 441, 3, '7700.00', '7700.00', '4/KOM/10/17', '2017-10-10', '2017-10-10 17:40:51', 'B17', '0.00', '2017-10-06', '2017-10-24', 14),
(53, 2, 189, 3, '553.50', '450.00', '5/KOM/10/17', '2017-10-11', '2017-10-11 17:42:50', 'BiCHER', '103.50', '2017-10-11', '2017-11-10', 30),
(54, 3, 3, 3, '3286.06', '2671.60', '6/KOM/10/17', '2017-10-17', '2017-10-17 10:50:16', 'Int', '614.46', '2017-10-17', '2017-11-16', 30),
(55, 3, 3, 3, '5388.50', '4380.90', '7/KOM/10/17', '2017-10-17', '2017-10-17 10:51:13', 'Inska', '1007.60', '2017-10-17', '2017-11-16', 30),
(56, 3, 3, 3, '1205.40', '980.00', '8/KOM/10/17', '2017-10-17', '2017-10-17 10:52:20', 'Insa', '225.40', '2017-10-17', '2017-11-16', 30),
(57, 3, 3, 3, '1033.20', '840.00', '9/KOM/10/17', '2017-10-17', '2017-10-17 10:53:10', 'Inka', '193.20', '2017-10-17', '2017-11-16', 30),
(58, 2, 437, 3, '64.80', '60.00', '10/KOM/10/17', '2017-10-17', '2017-10-17 13:29:00', 'Nume9', '4.80', '2017-10-17', '2017-11-16', 30),
(59, 2, 406, 3, '193.21', '178.90', '11/KOM/10/17', '2017-10-17', '2017-10-17 14:23:34', '', '14.31', '2017-10-17', '2017-11-16', 30),
(60, 3, 441, 3, '3575.00', '3575.00', '12/KOM/10/17', '2017-10-23', '2017-10-23 10:43:36', 'Wyk10.', '0.00', '2017-10-23', '2017-11-06', 14),
(61, 3, 4, 3, '2000.00', '2000.00', '13/KOM/10/17', '2017-10-24', '2017-10-24 15:32:23', 'Dot./16', '0.00', '2017-10-24', '2017-11-23', 30),
(62, 3, 147, 3, '8000.00', '8000.00', '14/KOM/10/17', '2017-10-24', '2017-10-24 18:29:19', 'Dota 16', '0.00', '2017-10-24', '2017-10-31', 7),
(63, 1, 189, 3, '1395.00', '1395.00', '15/KOM/10/17', '2017-10-25', '2017-10-25 13:43:50', 'Pio&M', '0.00', '2017-10-25', '2017-11-24', 30),
(64, 4, 590, 3, '775.43', '775.43', '18/KOM/08/17', '2017-08-31', '2017-10-27 12:55:31', '', '0.00', '2017-08-31', '2017-11-10', 14),
(65, 4, 590, 3, '775.43', '775.43', '23/KOM/09/17', '2017-09-29', '2017-10-27 12:56:40', '', '0.00', '2017-09-29', '2017-11-10', 14),
(66, 1, 442, 3, '282.93', '232.46', '16/KOM/10/17', '2017-10-31', '2017-10-31 12:27:55', 'Uw 6', '50.47', '2017-10-31', '2017-11-30', 30),
(68, 1, 189, 3, '399.75', '325.00', '17/KOM/10/17', '2017-10-31', '2017-10-31 12:37:26', 'Wrora', '74.75', '2017-10-31', '2017-11-30', 30),
(69, 1, 413, 3, '469.62', '434.84', '18/KOM/10/17', '2017-10-31', '2017-10-31 12:41:07', 'KsaC', '34.78', '2017-10-31', '2017-11-14', 14),
(70, 2, 111, 3, '368.87', '299.90', '19/KOM/10/17', '2017-10-31', '2017-10-31 12:46:50', '', '68.97', '2017-10-31', '2017-11-30', 30),
(71, 3, 441, 3, '2178.00', '2178.00', '1/KOM/11/17', '2017-11-02', '2017-11-02 15:37:04', '16.10-20.10.2017', '0.00', '2017-11-02', '2017-11-16', 14),
(72, 3, 441, 3, '8800.00', '8800.00', '2/KOM/11/17', '2017-11-02', '2017-11-02 15:40:11', '23.10 - 27.11.2017', '0.00', '2017-11-02', '2017-11-16', 14),
(73, 1, 413, 3, '218.19', '202.03', '20/KOM/10/17', '2017-10-31', '2017-11-06 15:55:48', 'Ksa10', '16.16', '2017-10-31', '2017-11-20', 14),
(74, 1, 132, 3, '870.47', '707.70', '21/KOM/10/17', '2017-10-31', '2017-11-06 16:22:05', '', '162.77', '2017-10-31', '2017-11-14', 14),
(75, 1, 608, 3, '413.28', '336.00', '22/KOM/10/17', '2017-10-31', '2017-11-07 13:11:58', 'Łód', '77.28', '2017-10-31', '2017-11-14', 14),
(77, 1, 146, 3, '49.20', '40.00', '23/KOM/10/17', '2017-10-31', '2017-11-07 13:26:42', 'Łź 4', '9.20', '2017-10-31', '2017-11-21', 14),
(78, 1, 590, 3, '775.43', '775.43', '24/KOM/10/17', '2017-10-31', '2017-11-07 13:31:24', '', '0.00', '2017-10-31', '2017-11-14', 14),
(79, 2, 111, 3, '2460.00', '2000.00', '25/KOM/10/17', '2017-10-31', '2017-11-07 13:39:09', '', '460.00', '2017-10-31', '2017-11-21', 14),
(81, 2, 323, 3, '3095.24', '2865.97', '26/KOM/10/17', '2017-10-31', '2017-11-07 13:41:32', '', '229.27', '2017-10-31', '2017-11-21', 14),
(82, 2, 323, 3, '1103.98', '1022.22', '27/KOM/10/17', '2017-10-31', '2017-11-07 13:54:51', '', '81.76', '2017-10-31', '2017-11-21', 14),
(83, 2, 216, 3, '2827.40', '2298.70', '28/KOM/10/17', '2017-11-07', '2017-11-07 14:16:09', '', '528.70', '2017-10-31', '2017-11-21', 14),
(84, 2, 216, 3, '4737.11', '3851.31', '29/KOM/10/17', '2017-11-07', '2017-11-07 14:18:23', '', '885.80', '2017-10-31', '2017-11-21', 14),
(85, 2, 216, 3, '1966.05', '1598.42', '30/KOM/10/17', '2017-11-07', '2017-11-07 14:36:30', '', '367.63', '2017-10-31', '2017-11-21', 14),
(86, 2, 189, 3, '615.00', '500.00', '3/KOM/11/17', '2017-11-08', '2017-11-08 10:56:24', 'Aer', '115.00', '2017-11-08', '2017-12-08', 30);
Below is my query attempt. I am not sure this is the proper approach; the result seems to be quite off compared with running each query independently.
SELECT
    SUM(przychody.netto) AS last_month,
    SUM(Query1.netto) AS this_month
FROM
    przychody,
    (SELECT *
     FROM przychody
     WHERE przychody.sprzedano >= LAST_DAY(CURRENT_DATE()) + INTERVAL 1 DAY - INTERVAL 2 MONTH
       AND przychody.sprzedano <  LAST_DAY(CURRENT_DATE()) + INTERVAL 1 DAY - INTERVAL 1 MONTH
    ) Query1
WHERE
    przychody.sprzedano >= LAST_DAY(CURRENT_DATE()) + INTERVAL 1 DAY - INTERVAL 1 MONTH
SQLFiddle playground : http://sqlfiddle.com/#!9/e18a49/1
Preferred output
Last_month | Current_month
SUM() | SUM()
You can get the sums for the current and previous month in a single query by using CASE inside SUM():
SELECT
SUM( CASE WHEN sprzedano >= DATE_FORMAT(NOW() ,'%Y-%m-01') AND sprzedano <= LAST_DAY(NOW()) THEN netto ELSE 0 END) this_month,
SUM( CASE WHEN sprzedano >= DATE_FORMAT(NOW() - INTERVAL 1 MONTH ,'%Y-%m-01') AND sprzedano <= LAST_DAY(NOW() - INTERVAL 1 MONTH) THEN netto ELSE 0 END) last_month
FROM `przychody`
Demo
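As an alternative sketch (same przychody table assumed), you can also return one row per month by grouping, which generalizes beyond two months:
SELECT DATE_FORMAT(sprzedano, '%Y-%m') AS month,
       SUM(netto) AS netto_sum
FROM przychody
WHERE sprzedano >= DATE_FORMAT(NOW() - INTERVAL 1 MONTH, '%Y-%m-01')
GROUP BY DATE_FORMAT(sprzedano, '%Y-%m')
ORDER BY month;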

Conversion failed when converting date and/or time when creating a table

So I'm making a table with char, varchar, and date columns. I got an error message saying "Conversion failed when converting date and/or time". If anyone can help me fix this, you have my sincere thanks. :D
this is my code for creating and inserting data on my table:
Create table Employee
(EMP_NUM char(3),
EMP_LNAME varchar(15),
EMP_FNAME varchar(15),
EMP_INITIAL char(1),
EMP_HIREDATE date,
JOB_CODE char (3),
EMP_YEARS char(2))
Insert into Employee (EMP_NUM, EMP_LNAME, EMP_FNAME, EMP_INITIAL,
EMP_HIREDATE, JOB_CODE)
Values (101, 'News', 'John', 'G','08-Nov-00', 502),
(102, 'Senior', 'David', 'H','12-Jul-89', 501),
(103, 'Arbough', 'June', 'E','01-Dec-96', 500),
(104, 'Ramoras', 'Anne', 'K','15-Nov-87', 501),
(105, 'Johnson', 'Alice', 'k','01-Feb-93', 502),
(106, 'Smithfield', 'William', null, '22-Jun-04', 500),
(107, 'Alonzo', 'Maria', 'D','10-Oct-93', 500),
(108, 'Washington', 'Ralph', 'B', '22-Aug-91',501),
(109, 'Smith', 'Larry', 'W', '18-Jul-97', 501),
(110, 'Olenko', 'Gerald', 'A', '11-Dec-95', 505),
(111, 'Wabash', 'Geoff', 'B', '04-Apr-91', 506),
(112, 'Smithson', 'Darlene', 'M', '23-Oct-94', 507),
(113, 'Joenbrood', 'Delbert', 'K', '15-ov-96', 508),
(114, 'Jones', 'Annelise', null, '20-Aug-93', 508),
(115, 'Bawangi', 'Travis', 'B','25-Jan-92', 501),
(116, 'Pratt', 'Gerald', 'L','05-Mar-97', 510),
(117, 'Williamson', 'Angie', 'M', '19-Jun-96', 509),
(118, 'Frommer', 'james', 'J','04-Jan-05', 510);
and this is the complete error message:
Msg 241, Level 16, State 1, Line 11
Conversion failed when converting date and/or time from character string.
Use an explicit conversion (CAST) for the EMP_HIREDATE values.
For example:
SELECT CAST('18-Jul-97' AS DATE)
In your query:
Insert into Employee (EMP_NUM, EMP_LNAME, EMP_FNAME, EMP_INITIAL,
EMP_HIREDATE, JOB_CODE)
SELECT 101, 'News', 'John', 'G',CAST('08-Nov-00' AS DATE), 502
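Note that an explicit conversion alone will not rescue row 113, where '15-ov-96' is presumably a typo for '15-Nov-96'; that literal fails no matter how it is cast. A minimal sketch of the corrected pattern, assuming a us_english session language so that 'dd-Mon-yy' literals parse:
-- Corrected value for row 113 ('15-ov-96' -> '15-Nov-96'), with an explicit CAST:
INSERT INTO Employee (EMP_NUM, EMP_LNAME, EMP_FNAME, EMP_INITIAL, EMP_HIREDATE, JOB_CODE)
VALUES (113, 'Joenbrood', 'Delbert', 'K', CAST('15-Nov-96' AS DATE), 508);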

What is the best method to compare data using the join?

I'm building a section of my website called "My Ideal Surfboard". The user enters their data (weight, height, etc.), a comparison is made in the database (a join), and the ideal surfboard types for the user's profile are returned.
I have a reference table with all surfboard sizes and types according to the user's height, weight, and experience.
I'm doing the following:
It is divided into two tables:
  - Table USER obviously stores the user's data (experience, height and weight);
  - Table SURFBOARD holds the reference values (type, size, weight and litres) for each surfboard according to the user's experience, weight, and height.
-> I compare the USER table with the SURFBOARD table and return the ideal model to the user. How do I do this?
At first I thought of putting the same fields in both the USER table and the SURFBOARD table, doing an inner join, and getting the data I want.
However, both tables would then duplicate values:
`dados_usuario` `prancha`
height2 weight2
height2 weight2
height2 weight2
height1 weight2
height1 weight2
height1 weight2
I compare and I display...
I believe this is not good practice and not the best way to do it; I know there are other methods.
The issue is: what is the best way to compare these data? How do I identify which row is compatible with the data the user enters?
MY DATABASE:
CREATE TABLE USER(
usuario INT NOT NULL AUTO_INCREMENT,
nome VARCHAR(150) not null,
email VARCHAR(50) not null,
estilo VARCHAR(14) not null,
exp VARCHAR(13) not null,
altura VARCHAR(12) not null,
peso VARCHAR(9) not null,
PRIMARY KEY(usuario)
);
CREATE TABLE SURFBOARD(
prancha_pri INT NOT NULL AUTO_INCREMENT,
tipo_prancha VARCHAR(9) NOT NULL,
tamanho_prancha VARCHAR(9) not null,
meio_prancha VARCHAR(12) not null,
litragem_prancha VARCHAR(8) not null,
PRIMARY KEY (prancha_pri)
);
INSERTING DATA IN TABLE 'USER':
INSERT INTO EXPERIENCIA VALUES (NULL, 'joao', 'a#a.com', 'Surf', 'INICIANTE', '<1,60m', '>90kg');
INSERT INTO EXPERIENCIA VALUES (NULL, 'john', 'b#b.com', 'StandUP Paddle', 'INTERMEDIARIO', '1,81 - 1,90m', '81 - 90kg');
INSERT INTO EXPERIENCIA VALUES (NULL, 'carl', 'c#c.com', 'Surf', 'AVANÇADO', '>1,90m', '71 - 80kg');
INSERTING DATA IN TABLE SURFBOARD:
INSERT INTO PRANCHA VALUES (1, 'FUN', '8', '21 polegadas', '43L');
INSERT INTO PRANCHA VALUES (2, 'FUN', '8.8', '21 polegadas', '43L');
INSERT INTO PRANCHA VALUES (3, 'LONGBOARD', '9.2', '21 polegadas', '55L');
INSERT INTO PRANCHA VALUES (4, 'PRANCHA', '5.5 a 5.8', '20 polegadas', '30L');
INSERT INTO PRANCHA VALUES (5, 'PRANCHA', '5.5 a 5.10', '20 polegadas', '30L');
INSERT INTO PRANCHA VALUES (6, 'PRANCHA', '5.9 a 6.0', '21 polegadas', '32L');
INSERT INTO PRANCHA VALUES (7, 'PRANCHA', '6.0 a 6.4', '21 polegadas', '34L');
INSERT INTO PRANCHA VALUES (8, 'PRANCHA', '5.10 a 6.4', '20 polegadas', '30L');
INSERT INTO PRANCHA VALUES (9, 'PRANCHA', '5.10 a 6.4', '20 polegadas', '32L');
INSERT INTO PRANCHA VALUES (10, 'PRANCHA', '6.2 a 6.6', '21 polegadas', '32L');
INSERT INTO PRANCHA VALUES (11, 'PRANCHA', '6.4 a 6.8', '21 polegadas', '34L');
INSERT INTO PRANCHA VALUES (12, 'PRANCHA', '6.2 a 6.6', '20 polegadas', '30L');
INSERT INTO PRANCHA VALUES (13, 'PRANCHA', '6.2 a 6.6', '21 polegadas', '30L');
INSERT INTO PRANCHA VALUES (14, 'PRANCHA', '6.2 a 6.6', '21 polegadas', '34L');
INSERT INTO PRANCHA VALUES (15, 'PRANCHA', '6.2 a 6.6', '21 polegadas', '36L');
INSERT INTO PRANCHA VALUES (16, 'PRANCHA', '6.2 a 6.6', '21 polegadas', '38L');
INSERT INTO PRANCHA VALUES (17, 'PRANCHA', '6.2 a 7.0', '21 polegadas', '34L');
INSERT INTO PRANCHA VALUES (18, 'PRANCHA', '6.2 a 7.0', '21 polegadas', '38L');
INSERT INTO PRANCHA VALUES (19, 'PRANCHA', '5.5 a 5.8', '18 polegadas', '23L');
INSERT INTO PRANCHA VALUES (20, 'PRANCHA', '5.8 a 5.10', '18 polegadas', '24L');
INSERT INTO PRANCHA VALUES (21, 'PRANCHA', '5.10', '18 polegadas', '27L');
INSERT INTO PRANCHA VALUES (22, 'PRANCHA', '6.0 a 6.2', '19 polegadas', '28L');
INSERT INTO PRANCHA VALUES (23, 'PRANCHA', '6.0 a 6.2', '19 polegadas', '29 a 31L');
INSERT INTO PRANCHA VALUES (24, 'PRANCHA', '5.10 a 6.0', '19 polegadas', '24L');
INSERT INTO PRANCHA VALUES (25, 'PRANCHA', '5.10', '19 polegadas', '26L');
INSERT INTO PRANCHA VALUES (26, 'PRANCHA', '6.0', '19 polegadas', '27L');
INSERT INTO PRANCHA VALUES (27, 'PRANCHA', '6.0', '19 polegadas', '29L');
INSERT INTO PRANCHA VALUES (28, 'PRANCHA', '6.2', '20 polegadas', '30 a 31L');
INSERT INTO PRANCHA VALUES (29, 'PRANCHA', '6.0', '19 polegadas', '25L');
INSERT INTO PRANCHA VALUES (30, 'PRANCHA', '6.0', '19 polegadas', '28L');
INSERT INTO PRANCHA VALUES (31, 'PRANCHA', '6.0', '19 polegadas', '30L');
INSERT INTO PRANCHA VALUES (32, 'PRANCHA', '6.0 a 6.2', '20 polegadas', '30 a 31L');
INSERT INTO PRANCHA VALUES (33, 'PRANCHA', '5.11', '19 polegadas', '26L');
INSERT INTO PRANCHA VALUES (34, 'PRANCHA', '5.11', '19 polegadas', '28L');
INSERT INTO PRANCHA VALUES (35, 'PRANCHA', '6.0', '20 polegadas', '29L');
INSERT INTO PRANCHA VALUES (36, 'PRANCHA', '6.1', '20 polegadas', '30L');
INSERT INTO PRANCHA VALUES (37, 'PRANCHA', '6.1 a 6.6', '20 polegadas', '30 a 31L');
INSERT INTO PRANCHA VALUES (38, 'PRANCHA', '6.1', '19 polegadas', '27L');
INSERT INTO PRANCHA VALUES (39, 'PRANCHA', '6.1', '19 polegadas', '28L');
INSERT INTO PRANCHA VALUES (40, 'PRANCHA', '6.1 a 6.3', '20 polegadas', '29L');
INSERT INTO PRANCHA VALUES (41, 'PRANCHA', '6.1 a 6.4', '20 polegadas', '31L');
INSERT INTO PRANCHA VALUES (42, 'PRANCHA', '6.2 a 6.6', '20 polegadas', '31L');
In my form, the user sees exact height and weight values; however, the value of each field is the corresponding range from my reference table:
HEIGHT:
<option value="1,71 - 1,80m">1.71m</option>
<option value="1,71 - 1,80m">1.72m</option>
<option value="1,71 - 1,80m">1.73m</option>
<option value="1,71 - 1,80m">1.74m</option>
<option value="1,71 - 1,80m">1.75m</option>
<option value="1,71 - 1,80m">1.76m</option>
WEIGHT:
<option value="81 - 90kg">88Kg</option>
<option value="81 - 90kg">89Kg</option>
<option value="81 - 90kg">90Kg</option>
<option value=">90kg">91Kg</option>
<option value=">90kg">92Kg</option>
<option value=">90kg">93Kg</option>
<option value=">90kg">94Kg</option>
First you have to tell us, given a user and some boards, what makes the "ideal" board(s). Then that gets translated to SQL.
PS
You are going to have to remove units from your values if you want your queries to talk about those values using comparisons or arithmetic. (And then you can store numbers using numerical types.) You are going to have to store ranges as pairs of columns if you want to mention their endpoints easily. Otherwise you will have to write things like
U.HEIGHT >= get_min_from_range_as_number(B.HEIGHT)
U.WEIGHT <= get_weight_as_number_without_units(B.WEIGHT)
where the functions are complex.
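As a sketch of what that might look like (all column names here are assumptions, not taken from the original schema):
-- Hypothetical redesign: units stripped, numbers stored numerically,
-- and each range stored as a min/max pair of columns.
CREATE TABLE usuario_n (
  id        INT NOT NULL AUTO_INCREMENT,
  altura_cm INT NOT NULL,  -- height in centimetres, e.g. 178
  peso_kg   INT NOT NULL,  -- weight in kilograms, e.g. 85
  PRIMARY KEY (id)
);
CREATE TABLE prancha_n (
  id            INT NOT NULL AUTO_INCREMENT,
  tipo          VARCHAR(20) NOT NULL,
  min_altura_cm INT NOT NULL,  -- suitable height range, lower bound
  max_altura_cm INT NOT NULL,  -- suitable height range, upper bound
  min_peso_kg   INT NOT NULL,
  max_peso_kg   INT NOT NULL,
  litragem      DECIMAL(4,1) NOT NULL,  -- volume in litres
  PRIMARY KEY (id)
);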
Say you want rows where:
user USERID has ideal board(s) BOARDID
You have to tell us what that means in terms of simpler things. E.g., perhaps it means that:
(read U.ID as USERID, B.ID as BOARDID)
there exist values for U.NAME, U.HEIGHT, ..., B.WEIGHT where
user U.ID with name U.NAME ... has height U.HEIGHT...
AND board B.ID suits height between B.MINHEIGHT and B.MAXHEIGHT ...
AND U.HEIGHT >= B.MINHEIGHT AND U.HEIGHT <= B.MAXHEIGHT
AND (B.MINHEIGHT + B.MAXHEIGHT)/2 <= U.WEIGHT * 100
AND ...
OR ...
Now we need a query that returns the rows that make that statement template into a true statement.
Already User holds rows where:
user ID with name NAME ... has height HEIGHT ...
And Board holds rows where:
board ID suits height between MINHEIGHT and MAXHEIGHT ...
But the nature of SQL JOIN is that `table1 t1 JOIN table2 t2` holds the rows that satisfy the first table's statement template ANDed with the second's, with parameters/columns prefixed by aliases and dots. So User U JOIN Board B holds rows where:
user U.ID with name U.NAME ... has height U.HEIGHT ...
AND board B.ID suits height between B.MINHEIGHT and B.MAXHEIGHT ...
And the nature of WHERE is that `table WHERE condition` holds the rows that satisfy the table's statement template ANDed with the condition. So
User U JOIN Board B
WHERE U.HEIGHT >= B.MINHEIGHT AND U.HEIGHT <= B.MAXHEIGHT
...
holds rows where:
user U.ID with name U.NAME ... has height U.HEIGHT ...
AND board B.ID suits height between B.MINHEIGHT and B.MAXHEIGHT ...
AND U.HEIGHT >= B.MINHEIGHT AND U.HEIGHT <= B.MAXHEIGHT
...
Then SELECT drops any unwanted parameters/columns. So the rows where user U.ID has ideal board(s) B.ID come from:
SELECT U.ID, B.ID
FROM User U JOIN Board B
WHERE U.HEIGHT >= B.MINHEIGHT AND U.HEIGHT <= B.MAXHEIGHT
...
SELECT also renames columns. So to get our overall query for the rows where user USERID has ideal board(s) BOARDID, we need:
SELECT U.ID AS USERID, B.ID AS BOARDID
FROM User U JOIN Board B
WHERE U.HEIGHT >= B.MINHEIGHT AND U.HEIGHT <= B.MAXHEIGHT
...
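Filled in against the hypothetical numeric schema sketched above, that final template might read:
SELECT U.id AS USERID, B.id AS BOARDID
FROM usuario_n U
JOIN prancha_n B
  ON  U.altura_cm BETWEEN B.min_altura_cm AND B.max_altura_cm
  AND U.peso_kg   BETWEEN B.min_peso_kg   AND B.max_peso_kg;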