I have created an AWS Python Lambda which simulates data and sends it as messages to a relevant AWS IoT topic.
Instead of reading the Client_ID from os.environ in the Lambda, I want to pull it from a JSON file that I have stored in S3, using boto3.
You should consider using "S3 Select", which allows you to query a file in S3 directly without having to download it to the system. In boto3 the call is select_object_content. I built out a sample below from your info and the boto3 docs.
response = client.select_object_content(
    Bucket='mybucketname',
    Key='simulated/config/IoT-sim-config.json',
    Expression="SELECT s.* FROM S3Object s WHERE s.client_id = 'Sim_1'",  # You will need to fiddle with the quotes on the SQL here.
    ExpressionType='SQL',
    InputSerialization={
        'JSON': {
            'Type': 'DOCUMENT'
        }
    },
    OutputSerialization={
        'JSON': {
            'RecordDelimiter': ','
        }
    }
)
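To read the matching record back, note that select_object_content returns its results as an event stream under the Payload key, so you have to collect the Records chunks yourself. Here is a rough, untested sketch of that, reusing the bucket/key from above but assuming a newline record delimiter instead of the comma, so that each record comes out as its own JSON line:

import json
import boto3

client = boto3.client('s3')

response = client.select_object_content(
    Bucket='mybucketname',
    Key='simulated/config/IoT-sim-config.json',
    Expression="SELECT s.* FROM S3Object s WHERE s.client_id = 'Sim_1'",
    ExpressionType='SQL',
    InputSerialization={'JSON': {'Type': 'DOCUMENT'}},
    OutputSerialization={'JSON': {'RecordDelimiter': '\n'}},
)

# The Payload is an event stream; gather the Records chunks into one string.
records = ''
for event in response['Payload']:
    if 'Records' in event:
        records += event['Records']['Payload'].decode('utf-8')

# Each non-empty line is one JSON record matching the WHERE clause.
for line in filter(None, records.split('\n')):
    item = json.loads(line)
    print(item.get('client_id'))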
I'm working with a Raspberry Pi Zero and Python to send and receive sensor data with Azure IoT. I've already created an endpoint and message routing to the storage container. But when I check the JSON files in the container, I have two problems:
The files include various general data which I don't need
My message body is in Base64 format
My message looks like this:
{"EnqueuedTimeUtc":"2021-06-25T13:03:25.7110000Z","Properties":{},"SystemProperties":{"connectionDeviceId":"RaspberryPi","connectionAuthMethod":"{"scope":"device","type":"sas","issuer":"iothub","acceptingIpFilterRule":null}","connectionDeviceGenerationId":"637555519600003402","enqueuedTime":"2021-06-25T13:03:25.7110000Z"},"Body":"eyJ0ZW1wZXJhdHVyZSI6IDI4Ljk1LCAicHJlc3N1cmUiOiA5ODEuMDg2Njk1NDU5MzMyNiwgImh1bWlkaXR5IjogNDYuMjE0ODE3NjkyOTEyODgsICJ0aW1lIjogIjIwMjEtMDYtMjUgMTQ6MDM6MjUuNjMxNzk1In0="}
The body contains my sensor data in Base64 format. I've already read about contentType = application/json and contentEncoding = UTF-8 so that Azure can work with proper JSON files. But where do I apply these settings? When I apply them to the routing query, I get the following error:
Routing Query Error (The server didn't understand your query. Check your query syntax and try again)
I just want to get the body message in correct JSON format.
Thank you all for any kind of help! Since it's my first experience with this kind of stuff, I'm a little helpless.
Zero clue if this helps, but here is my code for sending data from a Raspberry Pi (Python) to AWS - Parse Server using base64/JSON. The only reason I use base64 is to send pictures; you should only have to use JSON to send your other data.
import requests
import random, time
import math
import json
import Adafruit_DHT
import base64
from Adafruit_CCS811 import Adafruit_CCS811
from picamera import PiCamera
from time import sleep
DHT_SENSOR = Adafruit_DHT.DHT22
DHT_PIN =4
ccs = Adafruit_CCS811()
camera = PiCamera()
while True:
    time.sleep(5)
    camera.start_preview()
    sleep(5)
    camera.capture('/home/pi/Desktop/image.jpg')
    camera.stop_preview()
    with open('/home/pi/Desktop/image.jpg', 'rb') as binary_file:
        binary_file_data = binary_file.read()
        base64_encoded_data = base64.b64encode(binary_file_data)
        base64_message = base64_encoded_data.decode('utf-8')
    humidity, temperature = Adafruit_DHT.read_retry(DHT_SENSOR, DHT_PIN)
    ccs.readData()
    parseServer = {
        "temp": temperature,
        "humid": humidity,
        "co2": ccs.geteCO2(),
        "pic": base64_message
    }
    resultJSON = json.dumps(parseServer)
    headers = {
        'X-Parse-Application-Id': 'myappID',
        'Content-Type': 'application/json',
    }
    data = resultJSON
    response = requests.put('http://1.11.111.1111/parse/classes/Gamefuck/TIuRnws3Ag',
                            headers=headers, data=data)
    print(data)
If you're using the Python SDK for Azure IoT, sending the message as UTF-8 encoded JSON is as easy as setting two properties on your message object. There is a good example here
msg.content_encoding = "utf-8"
msg.content_type = "application/json"
Furthermore, you don't need to change anything in IoT Hub for this. This message setting is a prerequisite to be able to do message routing based on the body of the message.
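For completeness, here is a minimal sketch of what that can look like with the azure-iot-device SDK; the connection string and payload values are placeholders:

import json

from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string; substitute your device's own.
CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=RaspberryPi;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)

# Example payload; in your case this would be the sensor readings.
payload = {"temperature": 28.95, "pressure": 981.09, "humidity": 46.21}

msg = Message(json.dumps(payload))
msg.content_encoding = "utf-8"         # body is UTF-8 text, not base64
msg.content_type = "application/json"  # body should be treated as JSON

client.send_message(msg)
client.shutdown()

With those two properties set, the routed message body should end up stored as readable JSON rather than a base64 string.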
I want to receive one JSON file containing all the translations for all languages, store it in local storage, and later retrieve those translations from local storage at run time.
As of now I have achieved translation by maintaining files on the backend only, in the public folder. But now I want to get that JSON from the backend. I have no idea how; if you have any, let me know.
Here is the i18.js file:
import i18next from 'i18next'
import { initReactI18next } from 'react-i18next'
import HttpApi from 'i18next-http-backend'
import LanguageDetector from 'i18next-browser-languagedetector';

i18next
  .use(HttpApi)
  .use(LanguageDetector)
  .use(initReactI18next)
  .init({
    supportedLngs: ['en', 'hi', 'cn'],
    fallbackLng: 'en',
    debug: false,
    detection: {
      order: ['cookie'],
      caches: ['cookie'],
    },
    backend: {
      loadPath: '/locales/{{lng}}/translation.json',
    },
  })
And below is the JSON that I will receive from the backend when the app loads:
{
  "alllanguage": {
    "en": {
      "welcome": "welcome"
      ...
    },
    "hi": {
      "welcome": "स्वागत हे"
    },
    "cn": {
      "welcome": "欢迎"
    }
  }
}
Help me out: how can I use the i18n library for localization, fetching one JSON from the backend and using that same file later on?
Why are you fetching the file from the backend? Just store it on the front end.
The JSON file contains the data below, and the structure is the same across all files, but the data under "Profile" changes quite frequently.
So, I am trying to write a Python script which will copy and replace the entire data under the "Profile" node into other JSON files (the config data is static, as each machine has different static values).
Algo:
Parse the source JSON file
Read and extract the Profile data
Parse the other JSON file and replace its entire Profile data
{
    "config":
    [
        {
            "Name": "FW",
            "MLC": "anotoly",
            "Frame": "True",
            "ImageUpload": "False"
        }
    ],
    "Profile":
    [
        {
            "Title": "CLI",
            "File": "",
            "Profile": "H",
            "App": "AI.exe",
            "Location": "\\common\\Isolation",
            "sensitivity": 20
        }
    ]
}
import json, os, sys

def testDataTransform():
    source = 'C:/Users/WS/Downloads/Profile.json'
    Destination = 'D:/OL/SV/4.json'
    datastore = None
    with open(source, 'r') as content_file:
        content = content_file.read()
        datastore = json.load(content)
This code is not working, any help would be great. Thanks.
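Not your exact code, but here is a minimal sketch of the algorithm you describe, using the paths from your snippet. Note that json.load expects a file object rather than the string returned by read() (that mismatch is why the last line fails); once both files are loaded, the "Profile" node can be copied across and written back out.

import json

def copy_profile(source, destination):
    # Load both JSON files into Python dicts (json.load takes the file object).
    with open(source, 'r') as src_file:
        src_data = json.load(src_file)
    with open(destination, 'r') as dst_file:
        dst_data = json.load(dst_file)

    # Replace the entire "Profile" node in the destination with the source's.
    dst_data['Profile'] = src_data['Profile']

    # Write the updated document back to the destination file.
    with open(destination, 'w') as dst_file:
        json.dump(dst_data, dst_file, indent=4)

copy_profile('C:/Users/WS/Downloads/Profile.json', 'D:/OL/SV/4.json')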
I have an endpoint that serves an ML model and I want to perform load testing on it. I'm using JMeter 4.0 and its UI to construct a simple test plan, with one thread group that loops for a given duration and continuously performs HTTPS requests.
How do I parse multiple test examples into the payload of an HTTP request, one by one and in JSON format? These examples are contained in a JSON file called samples.json. The nested structure is the following:
{
  "dataset": [
    {
      "id": 1,
      "in": [
        {
          "Feature1": 8.9,
          "Feature2": 7.1
        }
      ],
      "out": "Class1"
    },
    {
      "id": 2,
      "in": [
        {
          "Feature1": 3.2,
          "Feature2": 5.1
        }
      ],
      "out": "Class1"
    }
  ]
}
IMPORTANT: I do not know the number of attributes a priori, so I need to retrieve them from the in key, as that may change for other types of models. Therefore I can't make use of hardcoded JMeter variables, similar to what is used in the CSV Data Set Config element, where you need to specify a variable name for each column of the CSV file.
I have no idea how you're going to use the values from the JSON in the HTTP Request sampler, however this is how you can parse your samples.json file and get the in values from it in a JSR223 Sampler:
new groovy.json.JsonSlurper().parse(new File('samples.json')).dataset.each { entry ->
    entry.get('in').each { feature ->
        feature.each { value ->
            log.info(value.key + '=' + value.value)
        }
    }
}
The above code basically prints the keys and their respective values into the jmeter.log file.
But you can easily amend it to store the values into JMeter Variables, write them into a CSV file, set the HTTP Request sampler to use them on the fly, etc.
More information:
Groovy: Parsing and producing JSON
Apache Groovy - Why and How You Should Use It
Apache JMeter API
I was successful in publishing (POST) a JSON file in Zapier and creating a Storage for it. However, I'd like to access the JSON in Zapier Storage using Python code run locally. I am able to access the storage with Python 3 and see that something is written there, but I cannot access the JSON contents.
import urllib
import json
import codecs

reader = codecs.getreader("utf-8")
access_token = "password"

def GetStorage(page_id, access_token):
    url = 'https://hooks.zapier.com/url/'
    response = urllib.request.urlopen(url)
    data = json.load(reader(response))
    return data

a = GetStorage(url, access_token)
print(a)
All I get is:
{'attempt': '5a539a49-65eb-44f8-a30e-e171faf7a680',
'id': '1b38d21a-0150-46df-98c1-490a0d04b565',
'request_id': '5a539a49-65eb-44f8-a30e-e171faf7a680',
'status': 'success'}
When in fact I need:
{'Name':'value',
'Address': 'value'
}
Any ideas ?
David here, from the Zapier Platform team.
You're close! hooks.zapier.com is the url we use for incoming webhooks, so we always reply with a 200 and the response body you're seeing.
Instead, use store.zapier.com. You'll also want to make sure to include your secret. A full request URL will look like:
https://store.zapier.com/api/records?secret=test
which will return arbitrary json data:
{
"name": "david",
"job": "programmer"
}
The full docs are available, in JSON, here: https://store.zapier.com/
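Adapting the code in the question to that endpoint could look roughly like this; the secret value is a placeholder:

import json
import urllib.request

# Placeholder secret; use the one from your own Storage by Zapier setup.
STORE_URL = "https://store.zapier.com/api/records?secret=YOUR_SECRET"

def get_storage(url):
    # store.zapier.com returns the stored records directly as JSON.
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

print(get_storage(STORE_URL))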