I'm running the following command in Redshift:
myDB=> unload ('select * from (select * from myTable limit 2147483647);')
to 's3://myBucket/'
credentials 'aws_access_key_id=***;aws_secret_access_key=***';
Here is what I get back:
ERROR: S3ServiceException:The bucket you are attempting to access must be addressed
using the specified endpoint. Please send all future requests to this
endpoint.,Status 301,Error PermanentRedirect,Rid 85ACD9FFAFC5CE8F,
ExtRid vsz4/0NdOAYbaJ48WYCnrYBCvuuL0cBTdcEN
DETAIL:
-----------------------------------------------
error: S3ServiceException:The bucket you are attempting to access must be addressed
using the specified endpoint. Please send all future requests to this
endpoint.,Status 301,Error PermanentRedirect,Rid 85ACD9FFAFC5CE8F,
ExtRid vsz4/0NdOAYbaJ48WYCnrYBCvuuL0cBTdcEN
code: 8001
context: Listing bucket=myBucket prefix=
query: 0
location: s3_unloader.cpp:181
process: padbmaster [pid=19100]
-----------------------------------------------
Any thoughts? Or maybe ideas on how to dump data from Redshift into MySQL or something similar?
This error message is returned when using path-like syntax with a non-US bucket. Create a new bucket in the same region as your Redshift cluster and everything should work.
You are missing the prefix part of the filename. Try using s3://myBucket/myPrefix
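If you are unsure which region the bucket actually lives in, a quick check from Python can confirm whether it matches the cluster's region (a sketch only, assuming boto3 is installed and the same credentials are configured; the bucket name is the one from the question):
import boto3
# Ask S3 which region the bucket lives in; a mismatch with the Redshift
# cluster's region is what produces the 301 PermanentRedirect from UNLOAD.
s3 = boto3.client("s3")
location = s3.get_bucket_location(Bucket="myBucket")["LocationConstraint"]
# LocationConstraint is None for us-east-1 and the region name otherwise.
print(location or "us-east-1")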
I have an on-premise Sentry instance. We need to export all the event data and be able to read it. I've been able to export the tables we are interested in, and most are readable, but there is a nodestore_node table, with data stored as a gzipped JSON blob-as-text, which I'm not familiar with. I've tried a bunch of different things but haven't been successful in converting it to something human-readable. A few things I've tried so far:
SQL UNCOMPRESS (just ended up with NULL for each row of converted_data)
SELECT nodestore_node.data, CONVERT( UNCOMPRESS( nodestore_node.data ) USING 'utf8' ) AS converted_data FROM nodestore_node;
SQL cast (got error messages about CAST for this one)
SELECT CAST( CAST( 'string' as XML ).value('.','varbinary(max)') AS varchar(max) )
decompress gzip in JS
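(For reference, the same idea expressed in Python; this is only a sketch and it assumes the column really holds zlib/gzip-compressed JSON, possibly base64-encoded as text, which may not match how your Sentry version stores nodestore data.)
import base64
import json
import zlib

def decode_node(raw):
    """Best-effort decode of one nodestore_node.data value (assumption-laden sketch)."""
    blob = raw.encode() if isinstance(raw, str) else bytes(raw)
    # The text column may be base64-encoded binary; fall back to the raw bytes if it is not.
    try:
        blob = base64.b64decode(blob, validate=True)
    except Exception:
        pass
    # wbits=47 lets zlib auto-detect both zlib- and gzip-wrapped streams.
    payload = zlib.decompress(blob, 47)
    return json.loads(payload)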
I also tried to see if there was an API to export all the data but couldn't find one; if anyone knows a way to pull the converted nodestore data through an on-prem API, I'm all ears.
Any ideas would be greatly appreciated!
There is an events API:
https://docs.sentry.io/api/events/list-a-projects-events/
On-premise auth tokens can be generated as noted in the documentation (https://docs.sentry.io/api/auth/); the base URL just needs to be swapped out:
{base_url}/settings/account/api/auth-tokens/
Then, once a token is generated, the events endpoint can be called:
curl {base_url}/api/0/projects/{organization_slug}/{project_slug}/events/ \
  -H 'Authorization: Bearer <auth_token>'
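If it helps, here is the same call from Python (a sketch; the base URL, slugs, and token below are placeholders, just like in the curl example):
import requests

base_url = "https://sentry.example.com"          # placeholder: your on-premise base URL
org_slug, project_slug = "my-org", "my-project"  # placeholder slugs
token = "<auth_token>"                           # generated at {base_url}/settings/account/api/auth-tokens/

resp = requests.get(
    f"{base_url}/api/0/projects/{org_slug}/{project_slug}/events/",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print(len(resp.json()), "events on the first page")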
Big thanks to BYK at Sentry for all the answers: https://forum.sentry.io/t/decompressing-gzipped-json-blob-as-text-nodestore-node-table-data-from-on-premise-instance-of-sentry/11897
I have some implementation that is best served by pickling a pandas dataframe and storing it in a DB.
This works fine if the database is SQLite but fails with a load error when it is MySQL.
I have found other people with similar issues on Stack Overflow and Google, but it seems that everybody's solution is to use SQL to store the dataframe.
As a last resort I would go down that route, but it would be a shame to do that for this use case.
Has anybody got a solution to get the same behaviour from MySQL as from SQLite here?
I simply dump the dataframe with
pickledframe = pickle.dumps(frame)
and store pickledframe as a BinaryField
pickledframe = models.BinaryField(null=True)
I load it in with
unpickled = pickle.loads(pickledframe)
With SQLite it works fine; with MySQL I get
Exception Type: UnpicklingError
Exception Value: invalid load key, ','.
upon trying to load it.
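For context, a stripped-down version of the round trip (the model and variable names are only illustrative, and the model is assumed to live in an installed Django app):
import pickle

import pandas as pd
from django.db import models

class FrameStore(models.Model):                    # illustrative model
    pickledframe = models.BinaryField(null=True)

# store
frame = pd.DataFrame({"a": [1, 2, 3]})
FrameStore.objects.create(pickledframe=pickle.dumps(frame))

# load -- works on SQLite, raises the UnpicklingError above on MySQL
row = FrameStore.objects.first()
unpickled = pickle.loads(bytes(row.pickledframe))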
Thanks
I'm working with read-through/write-through caching via Apache Ignite and encountered the following problem:
In the latest versions of PostgreSQL there is a special json data type and a jsonb variant for working with JSON in the database.
As far as I know, these functions are not implemented in Apache Ignite. Moreover, while trying to work on this, I found that it is possible to read JSON from the database as a PGobject, but there is no way to add jsonb processing to the built-in SQL query parser.
For example, I'm trying to send the following query:
SELECT jdata->>'tag1' FROM jsontest;
And I get this exception:
Syntax error in SQL statement "SELECT JDATA-[*]>>'tag1' FROM JSONTEST; "; SQL statement:
SELECT jdata->>'tag1' FROM jsontest; [42000-195]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.message.DbException.getSyntaxError(DbException.java:191)
at org.h2.command.Parser.getSyntaxError(Parser.java:533)
at org.h2.command.Parser.getSpecialType(Parser.java:3842)
at org.h2.command.Parser.read(Parser.java:3352)
at org.h2.command.Parser.readIf(Parser.java:3259)
at org.h2.command.Parser.readSum(Parser.java:2375)
at org.h2.command.Parser.readConcat(Parser.java:2341)
at org.h2.command.Parser.readCondition(Parser.java:2172)
at org.h2.command.Parser.readAnd(Parser.java:2144)
at org.h2.command.Parser.readExpression(Parser.java:2136)
at org.h2.command.Parser.parseSelectSimpleSelectPart(Parser.java:2047)
at org.h2.command.Parser.parseSelectSimple(Parser.java:2079)
at org.h2.command.Parser.parseSelectSub(Parser.java:1934)
at org.h2.command.Parser.parseSelectUnion(Parser.java:1749)
at org.h2.command.Parser.parseSelect(Parser.java:1737)
at org.h2.command.Parser.parsePrepared(Parser.java:448)
at org.h2.command.Parser.parse(Parser.java:320)
at org.h2.command.Parser.parse(Parser.java:296)
at org.h2.command.Parser.prepareCommand(Parser.java:257)
at org.h2.engine.Session.prepareLocal(Session.java:573)
at org.h2.engine.Session.prepareCommand(Session.java:514)
at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1204)
at org.h2.jdbc.JdbcPreparedStatement.<init>(JdbcPreparedStatement.java:73)
at org.h2.jdbc.JdbcConnection.prepareStatement(JdbcConnection.java:288)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.prepareStatement(IgniteH2Indexing.java:402)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1365)
... 9 more
And when I try to research the structure of the Ignite or H2 database engine, I find no way to add processing of such queries.
So, maybe someone has met this or a similar problem and can advise how to solve it?
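For comparison, the same expression is accepted when the statement goes straight to PostgreSQL rather than through Ignite's H2-based parser; a minimal sketch with psycopg2 (connection settings are placeholders, the table and column are from my example):
import psycopg2

# Placeholder connection settings.
conn = psycopg2.connect(host="localhost", dbname="mydb", user="postgres", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("SELECT jdata->>'tag1' FROM jsontest;")
    for (tag1,) in cur.fetchall():
        print(tag1)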
When I execute Oracle Reports I get the above-mentioned error.
I am using a query with three formula columns and generating XML for an RTF template.
All formula columns compiled successfully. How do I resolve this issue?
Error While Executing
A workaround, while waiting for the fix, is to set cacheSize to 50 in $INST_TOP/ora/10.1.2/reports/conf/rwbuilder.conf.
When cacheSize is 0 in the server conf file, Cache.manage() removes output files in the cache directory after a request finishes successfully, but a non-zero value disables the cache clean-up functionality.
For more details, see Oracle Doc ID 1237834.1.
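For reference, the entry in rwbuilder.conf is an XML property on the cache element; it looks roughly like the snippet below (the element and class names here are from memory and may differ between versions, so treat this as a sketch rather than a drop-in config):
<cache class="oracle.reports.cache.RWCache">
   <!-- a non-zero cacheSize prevents Cache.manage() from deleting the output files -->
   <property name="cacheSize" value="50"/>
</cache>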
I cannot run source table_name.sql
When I try, I get the following error:
ERROR:
Failed to open file 'cars.sql', error: 2
I have been following ZetCode:
http://zetcode.com/databases/mysqltutorial/introduction/#mysql
http://zetcode.com/databases/mysqltutorial/firststeps/
The first provides a list of commands to create a database called mydb and a set of tables to be used in the tutorial, including one named Cars.
The second link shows how to access the databases (SHOW DATABASES;), which I could do. When I go to the previously created database mydb, I can see the previously created tables, including Cars (even though the tutorial says I should not see anything).
When I follow the next command, source cars.sql, I receive the error above.
Yet this query works:
mysql> SELECT * FROM Cars;
Any ideas as to why the source command would not work?
This is the first time I am working with MySQL.
Where is the file cars.sql located? You may need to specify the full path to the file in order for it to be located by MySQL.
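For example (the path below is just a placeholder for wherever cars.sql actually lives):
mysql> source /home/youruser/sql/cars.sql
Alternatively, load the file from the shell instead, which sidesteps the path question inside the client:
mysql -u youruser -p mydb < /home/youruser/sql/cars.sql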