I have two applications running on OpenShift. Currently, it is for test purposes only, but the intention is to run those apps on OpenShift for real later on.
One thing that surprises me is that the data I enter gets deleted more or less regularly.
That is, when I return to the URL some days later, some tables are empty.
There are currently three developers, and none of us deleted the data on purpose...
Does it have to do with our pricing plan? Is there any other explanation?
Any hints will be appreciated.
Have you looked at the log files?
OpenShift does not do anything that would truncate or touch your database. The only other explanation would be that you are out of disk space, but in that case you would get an error message saying so.
If you can provide us with more detail then we might give you a better answer. What database gear are you using? Do you get any log messages? Does your data get successfully inserted into the table in the first place?
OpenShift does not just go into your database gear and delete your data, but until you can tell us more, we can't give you better answers.
I created a Cloud SQL server for development without failover at the start.
Several days ago, I tried to create a failover replica for it, and the operation sat waiting for completion for a whole day without any further notification.
I tried again today, and I still cannot create it. The system always responds with:
Could not complete the operation.
I've tried changing the instance ID to different random names, but the result is still the same.
Does anyone have experience with this kind of problem? How can I solve it?
Did you choose a zone?
When you create a failover replica, the "location" field is auto-filled on the edit page, but the "zone" field is left empty; you need to select one from the drop-down list.
I just encountered the same problem as you, and this worked for me.
We currently have a ~2GB CRM database that's built on MySQL + ColdFusion and running on our local MS2012 server. We're looking to move it to a more usable, up-to-date solution that would allow flexibility, security, and backup options. It's also no longer going to be running on ColdFusion.
I received the full backup database as a .bak and have restored it successfully in Microsoft SQL Server Management Studio, so I can see the massive list of tables, views, programmability, service broker, storage, and security objects.
Salesforce seems to be a good bet, as we would likely be able to hire someone in the event that I leave, so someone could pick it back up and work with it. Also, Salesforce makes sense for what we're trying to do with the CRM.
I'm unsure about how to do this migration. Right now I'm working on a backup copy to practice and to put a process in place to ensure a smooth transition, because the company is still doing its day-to-day work on CF until we have a set-in-stone stop date for the transfer. It'll be a one-time transfer, so I don't need to establish a constant connection; I just want to pull in all the database tables, values, relationships, etc., and then get everyone set up. I realize pulling in users with login information might not be feasible, and I would have to create users in Salesforce. I do want the data each user has entered to be retained, though.
There might be some additional details you need to fully answer the question, so please let me know if I've left crucial gaps that would help get the proper answer.
DBAmp is pretty much industry standard at this point. https://appexchange.salesforce.com/listingDetail?listingId=a0N300000016bWzEAI. It will allow you to do CRUD from SQL.
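For example, once DBAmp is configured as a linked server in SQL Server, you can query Salesforce objects with ordinary T-SQL. A minimal sketch; the linked-server name SALESFORCE below is just the conventional default and may differ in your setup:

    -- Read Salesforce Account records through the DBAmp linked server
    SELECT Id, Name
    FROM   SALESFORCE...Account;

Bulk loads typically go through DBAmp's local staging tables and stored procedures rather than row-by-row SQL; check the vendor documentation for those workflows.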
Also, when you sign an agreement with Salesforce, they will also hook you up with an integration partner to whom you will pay large sums of money to help you with the transition.
Edit: Sorry, I guess I didn't really answer your full question. Yes, you are missing large, LARGE pieces of information. DBAmp will help you with your data, but don't think you will just be able to import your data structure over.
So I'm going to attempt to create a basic monitoring tool in VB.net. I'd like some advice on how to tackle the logging and reporting side of things, so I'd appreciate responses from users who I'm sure have a better idea than me and can tell me far more efficient ways of doing things.
My plan is to have a client tool which will read values from a MySQL database and basically refresh every x interval; I'm thinking 10/15 minutes at the moment. This side of the application is quite easy: I can get something to read a database every x amount of time and then change labels and display alerts based on the values. This is all well documented, and I am probably okay with that.
The second part is to have a client that sits in the system tray of the server gathering the required information. The system tray part I think will probably be the trickiest bit of this; however, that's not really part of my question.
I assume I can use the normal information-gathering commands, store the results perhaps as strings, and then connect to the same database and add them to the relevant fields. For example, if I had a MySQL table called "server" with a column titled "Connection", I could check whether the server has an internet connection and store the result as 1 for yes or 0 for no, then send a MySQL command to update the "Connection" value to 0/1.
Then, from the monitoring tool, I assume I can run a MySQL query to check the "Connection" column: if the value is 0, change a label or flag an error, and if it's 1, report that connectivity is okay?
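To make this concrete, here's a minimal sketch of the schema and queries I have in mind; the table and column names are just the placeholders from above, and 'web01' is a made-up host name:

    -- Hypothetical status table, written by the tray client and read by the monitor
    CREATE TABLE `server` (
        `id`           INT AUTO_INCREMENT PRIMARY KEY,
        `name`         VARCHAR(64) NOT NULL,
        `connection`   TINYINT(1)  NOT NULL DEFAULT 0,  -- 1 = connected, 0 = not
        `last_updated` TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
                                   ON UPDATE CURRENT_TIMESTAMP
    );

    -- The tray client reports its status each interval:
    UPDATE `server` SET `connection` = 1 WHERE `name` = 'web01';

    -- The monitoring tool polls:
    SELECT `name`, `connection`, `last_updated` FROM `server`;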
My main questions about the above are listed below.
Is using a MySQL database the most efficient way of doing something like this?
Obviously, if my database goes down there's no more reporting; I still think that's a con I'll have to live with, though.
Is storing everything as values within the code the best way to store my data?
Is there any particular type or format I should use for the MySQL column? I was thinking maybe tinyint(9).
Is the above method redundant and pointless?
I assume all these database connections could cause some unwanted server load; however, the 15-minute refresh time should combat that.
Is there a way to properly handle delays, e.g., the client not updating in time for the reporter, so it picks up stale data? Perhaps a fail-safe such as a column containing a last-updated time?
You probably don't need the tool that gathers information per se. The web app (real-time monitor) can do that, since the clients are storing their information in the same database. The web app can read the database every 15 minutes and display the data, without the intermediate step of saving it again. This gives the web app the latest information instead of a potential 29-minute delay.
In other words, the clients are saving the connection information once. Don't duplicate it in the database.
MySQL should work just about as well as anything.
It's a bad idea to hard-code "everything". You can use application settings or a MySQL table if you need to store IPs, etc.
In an application like this, the cost of converting values back and forth will more than offset the storage savings of a tinyint. I would use the most convenient data type.
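On the last question about stale data: the last-updated column you suggest is exactly the right fail-safe. A minimal sketch, assuming the `server` table has a `last_updated` TIMESTAMP column the client touches on every write (the 20-minute threshold is an arbitrary example):

    -- Flag rows whose client hasn't checked in recently enough
    SELECT `name`,
           `connection`,
           (`last_updated` < NOW() - INTERVAL 20 MINUTE) AS `is_stale`
    FROM   `server`;
    -- If is_stale is 1, report "no data" for that server instead of
    -- trusting the old connection value.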
I'm having issues and I don't know where to turn. Long story short, my web designer left me high and dry: I have no idea what he did, and he refuses to answer his phone. I have access to the main page, but after that I'm completely locked out and staring at a SearchPhaseExecutionException for every single product in my store. Any help would be much appreciated, as I am completely clueless about what to do. Here is the full error log; I can post any additional information necessary to troubleshoot this problem:
SearchPhaseExecutionException at /category/1
Failed to execute phase [query], total failure; shardFailures {[_na_][product][0]: No active shards}{[_na_][product][1]: No active shards}{[_na_][product][2]: No active shards}{[_na_][product][3]: No active shards}{[_na_][product][4]: No active shards}
Somewhere on your web site/farm you have an Elasticsearch server running. This server has an index called product, and I would guess this index contains information about the products in your store. Currently, this Elasticsearch server is experiencing some sort of issue that has made the index unavailable. It might be possible to tell what is going on by looking at the log file of the Elasticsearch server, which is different from the log file of your web server. Do you see any log files called elasticsearch.log?
By the way, since it might take several iterations to figure out what's going on, it might be easier to move this conversation to the Elasticsearch mailing list or the #elasticsearch IRC channel on freenode.
Sometimes this error happens because of the data: data to be searched has to be cleaned, as Elasticsearch will crash on some strings like " [PREPARATION " or even " word: ", since punctuation drives it crazy.
If you don't want to clean the data, you can just catch the exception and it will continue.
Is it possible to restore a table to an earlier point in time, with its data, if all the data was deleted accidentally?
There is another solution: if you have binary logs active on your server, you can use mysqlbinlog.
Generate a SQL file with it:
mysqlbinlog binary_log_file > query_log.sql
Then search for your missing rows. You can narrow the output window with mysqlbinlog's --start-datetime and --stop-datetime options.
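If you're not sure whether binary logging is even enabled, you can check from a MySQL session first:

    -- Is the binary log on, and which log files exist?
    SHOW VARIABLES LIKE 'log_bin';
    SHOW BINARY LOGS;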
If you don't have it active, there is no other solution. Make backups next time.
Sort of. Using phpMyAdmin, I just deleted one row too many, but I caught it before I proceeded and had most of the data from the delete confirmation message. I was able to rebuild the record, but the confirmation message truncated some of a text comment.
Someone more knowledgeable than I am regarding phpMyAdmin may know of a setting that gives you a more complete echo of the delete confirmation message. With a complete delete message available, if you slow down and catch your error, you can restore the whole record.
(P.S. This app also sends an email of the submission that creates the record. If the client has a copy, I will be able to restore the record completely.)
As Mitch mentioned, backing data up is the best method.
However, it may be possible to partially extract the lost data, depending on the situation or the DB server used. For the most part, you are out of luck if you don't have any backup.
I'm sorry, but it's not possible, unless you made a backup file earlier.
EDIT: Actually it is possible, but it gets very tricky, and you shouldn't attempt it unless the data was really, really important. You see, when data gets deleted from a computer it still remains in the same place on the disk; only its sectors are marked as empty. So the data remains intact unless it gets overwritten by new data. There are several programs designed for this purpose, and there are companies that specialize in data recovery, though they are rather expensive.
For InnoDB tables, Percona has a recovery tool which may help. It is far from fail-safe or perfect, and how quickly you stopped your MySQL server after the accidental deletes has a major impact. If you were quick enough, chances are you can recover quite a bit of data, but recovering all of it is nigh impossible.
Of course, proper daily backups, binlogs, and possibly a replication slave (which won't help with accidental deletes but does help in case of hardware failure) are the way to go, but this tool can enable you to save as much data as possible if you didn't have those in place yet.
No, this is not possible. The only solution is to have regular backups. This is very important.
Unfortunately, no. If you were running the server in default config, go get your backups (you have backups, right?) - generally, a database doesn't keep previous versions of your data, or a revision of changes: only the current state.
(Alternately, if you have deleted the data through a custom frontend, it is quite possible that the frontend doesn't actually issue a DELETE: many tables have a is_deleted field or similar, and this is simply toggled by the frontend. Note that this is a "soft delete" implemented in the frontend app - the data is not actually deleted in such cases; if you actually issued a DELETE, TRUNCATE or a similar SQL command, this is not applicable.)
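To illustrate the difference, a sketch with a hypothetical customers table that carries an is_deleted flag:

    -- Hard delete: the row is gone; only backups or binlogs bring it back.
    DELETE FROM customers WHERE id = 42;

    -- Soft delete: the row stays in the table...
    UPDATE customers SET is_deleted = 1 WHERE id = 42;

    -- ...and can be "restored" simply by flipping the flag back.
    UPDATE customers SET is_deleted = 0 WHERE id = 42;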
If you use MyISAM tables, then you may be able to recover some of the data you deleted: just
open the file mysql/data/[your_db]/[your_table].MYD
with any text editor.