Is there a way to get a reliable machine time that cannot be altered by changing the time/date on a mobile device or computer?
I'm looking for a way to measure elapsed time in an AIR app where you can...
run the app
change the system time (back an hour, or change the date forward by two weeks, etc.)
run the app again and have it still know the correct amount of time that has passed
Any ideas how to implement this without relying on a server connection?
I feel fairly confident saying it can't be done. Without a remote server to ping, all the app knows is what the OS tells it - and if the app isn't running, it can't be collecting its own data independently of the OS.
The closest you could get would be storing the system time at app shutdown, then comparing that to the system time on app startup to see if the OS' clock has been turned back - if the current time is before the previous time, you know something's up. Even then, the data about the last known system time has to be stored somewhere locally - which means the user is free to edit it if they want.
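The idea is language-agnostic, so here is a minimal sketch in Go purely for illustration (an AIR app would persist the same value in a SharedObject or an app-storage file rather than the hypothetical last_seen.txt used here), and the caveat above still applies: the user can delete or edit the stored value.

    // Minimal sketch: remember the last wall-clock time we saw and flag a rollback.
    // The state file name is hypothetical and user-editable, so this is only a heuristic.
    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"time"
    )

    const stateFile = "last_seen.txt"

    func main() {
    	now := time.Now().Unix()
    	if b, err := os.ReadFile(stateFile); err == nil {
    		if last, err := strconv.ParseInt(string(b), 10, 64); err == nil && now < last {
    			fmt.Println("system clock appears to have been turned back")
    		}
    	}
    	// Record the current time for the next run (would normally happen at shutdown).
    	os.WriteFile(stateFile, []byte(strconv.FormatInt(now, 10)), 0o644)
    }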
You could also try having the app run in the background constantly, or even as a service, though that moves beyond what AIR is really suited for - and even then, time would only pass while the system was on, and only if the user let the background process keep running.
If your problem is measuring the elapsed time between two moments, getTimer should do the job.
I am trying to build a way to log all the times my friends and I are on our video game server. I found an API which returns JSON listing the online players for a given server's IP address.
I plan to fetch a new JSON response every 30 seconds and then log player gaming sessions by figuring out when they get on and when they are no longer online.
The problem is that this is my first time using a database like this for my websites. I want to use (and will use) Golang to retrieve the JSON file and update my MySQL database of player logs.
PROBLEM: I have no freaking clue how to have my Golang file run every 30 seconds to update my database. I can get a simple program to grab data and update a local database easily, but I'm lost on how to get this to run on my website and have it run every 30ish seconds 24/7. I'm used to CRUD with simple HTML forms and other things related to user input, but I've never thought of changing my database separately from website interactions.
Is there a standard solution to my problem? Am I thinking about this all wrong? Do I make sense? Does God really exist!!?
I’m using BlueHost
I insist on Golang for the experience
First time using Stack Overflow, idk if my question was too long.
You need the UNIX/Linux scheduler known as cron to do this. cron lets you rig a program to run at a specific interval on your machine. (Note that cron's finest granularity is one minute, so a strict 30-second interval needs either two staggered entries or a long-running process.)
Bluehost's cron support lets you run PHP scripts. I don't believe they support running other kinds of programs.
If you have some other always-on machine, you can run your golang program there. But to do that, you will have to configure Bluehost to allow an external connection to your MySQL server so the program can connect. Ask their customer support people about that.
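If you do run it on a machine you control, you don't even need cron for a 30-second cadence; a long-running Go process with a ticker does the job. A minimal sketch, assuming a hypothetical status endpoint, table layout, and connection string rather than the actual API or schema from the question:

    // Minimal 30-second poller sketch. The URL, DSN, JSON field, and table are
    // placeholders; adjust them to the real API and schema.
    package main

    import (
    	"database/sql"
    	"encoding/json"
    	"log"
    	"net/http"
    	"time"

    	_ "github.com/go-sql-driver/mysql"
    )

    type status struct {
    	Players []string `json:"players"` // assumed field name
    }

    func poll(db *sql.DB) error {
    	resp, err := http.Get("https://api.example.com/server/1.2.3.4") // hypothetical endpoint
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()

    	var s status
    	if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
    		return err
    	}
    	for _, p := range s.Players {
    		// assumed table/columns; session start/end logic would build on rows like this
    		if _, err := db.Exec("INSERT INTO sightings (player, seen_at) VALUES (?, NOW())", p); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	db, err := sql.Open("mysql", "user:pass@tcp(dbhost:3306)/gamelog") // placeholder DSN
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer db.Close()

    	ticker := time.NewTicker(30 * time.Second)
    	defer ticker.Stop()
    	for range ticker.C {
    		if err := poll(db); err != nil {
    			log.Println("poll failed:", err)
    		}
    	}
    }

Run it under something that restarts it if it dies (systemd, supervisor, etc.) and it will keep polling 24/7 for as long as the machine is up.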
Pro tip: every 30 seconds may be too high an update frequency to use on a shared service like Bluehost. That kind of frequency is tricky to manage even if you completely control the server.
I'm working with MS Reporting Services 2016. I noticed that the application domain is set by default to recycle every 12 hours. Now the impact on users after a recycle is either slow response from reporting services or a failed report. Both disappear after a refresh of the report, but this is not ideal.
I have come across a SO answer where people suggest that you can turn off the scheduled recycle by setting the configuration attribute RecycleTime to zero.
I have also read about writing a script to manually restart Reporting Services, which also recycles the app domain, and then a script that simply loads a report at a controlled time to remove the first-time load issues. However, this all seems like a workaround to me and I would rather not have to do it.
My concern is that there must be a logical reason for having the scheduled recycle time, but I cannot find any information explaining this. Does anyone know if there is a negative impact from turning off the scheduled application domain recycle?
The RecycleTime is a function aimed at making sure SSRS isn't consuming RAM it doesn't need and potentially starving the rest of the machine. Disabling the refresh essentially removes the ability to claw back any memory used for a brief period of intensive processing.
If you are confident your machine is suitably resourced you can turn the refresh off; if not, alternatively schedule the refresh for an out-of-hours time and define a Cache Refresh Plan to cache any super-important reports immediately afterwards to minimise any user impact.
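For reference, the interval lives in the Service section of rsreportserver.config and is expressed in minutes; 720 is the 12-hour default, and setting it to 0, as mentioned in the question, is the suggested way to disable the scheduled recycle (back the file up before editing):

    <!-- rsreportserver.config, inside the <Service> element -->
    <!-- Minutes between application domain recycles; 720 = 12 hours (the default). -->
    <RecycleTime>720</RecycleTime>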
Further reading here: https://www.mssqltips.com/sqlservertip/2735/prevent-sql-server-reporting-services-slow-startup/
I guess I'm possibly oversimplifying this, but SSRS was designed to recycle every 12 hours (default) for a reason. If it ain't broke, don't fix it. In my case, I wanted to control when the recycle occurred. I execute a 1-line PowerShell script from a SQL Agent job at 6:50 am, then generate a subscription report at 7 am, which kick-starts SSRS, and the users do not see any performance degradation.
restart-service 'ReportServer'
Leaving the SSRS config file setting at 720 minutes lets the recycle occur again at 6:50 pm. Subscription reports generate throughout the night, so if a human gets on SSRS after hours there should be no performance issue because the system is already running.
Are we possibly overthinking it?
I have a few clumps of data that need to be synced. The app is a calendar in which dates are stored, along with some other information. So on app exit I need to sync all the dates to the server. The dates and other info are converted to JSON format and sent.
I have used HttpWebRequest to get the responses from the server, so there is a series of callbacks. The function SyncHistory is called in Application_Closing.
What happens is that I can see execution moving into SyncHistory, but once the app is closed it does not go on to call the other functions.
I need the app to sync data before it stops. I have tried the await keyword; sometimes it calls the functions, but other times it does not.
Where should the code ideally be put? I don't want to sync data every time the user enters data. Are there any other common exit points which run even after the app is closed?
This isn't a great idea - you only have a maximum of 10s to complete Application_Closing before the phone OS will shut down your app forcibly. Once your app is closed (or shut down forcibly) none of your code will run.
The nature of mobile phone networking and cellular networks is that you can't rely on having sent all your data to a server in 10s. You'll have to think of an alternative strategy if you want this to be reliable.
And you haven't even considered the Application_Deactivated scenario, where you get even less time to complete.
This is not the typical question, but I'm out of ideas and don't know where else to go. If there are better places to ask this, just point me there in the comments. Thanks.
Situation
We have this web application that uses Zend Framework, so runs in PHP on an Apache web server. We use MySQL for data storage and memcached for object caching.
The application has a rather unique usage and load pattern. It is a mobile web application where every full hour a cronjob looks through the database for users that have some information waiting or an action to do and sends this information to an (external) notification server, which pushes these notifications to them. After the users get these notifications, they go to the app and use it, mostly for a very short time. An hour later, the same thing happens.
Problem
In the last few weeks usage of the application really started to grow. In the last few days we encountered very high load and doubling of application response times during and after the sending of these notifications (so basically every hour). The server doesn't crash or stop responding to requests, it just gets slower and slower and often takes 20 minutes to recover - until the same thing starts again at the full hour.
We have extensive monitoring in place (New Relic, collectd) but I can't figure out what's wrong; I can't find the bottleneck. That's where you come in:
Can you help me figure out what's wrong and maybe how to fix it?
Additional information
The server is a 16 core Intel Xeon (8 cores with hyperthreading, I think) and 12GB RAM running Ubuntu 10.04 (Linux 3.2.4-20120307 x86_64). Apache is 2.2.x and PHP is Version 5.3.2-1ubuntu4.11.
If any configuration information would help analyze the problem, just comment and I will add it.
Graphs
info: phpinfo(), apc status, memcache status
collectd: Processes, CPU, Apache, Load, MySQL, Vmem, Disk
New Relic: Application performance, Server overview, Processes, Network, Disks
(Sorry the graphs are gifs and not the same time period, but I think the most important info is in there)
The problem is almost certainly MySQL-based. If you look at the final graph, mysql/mysql_threads, you can see the number of threads hits 200 (which I assume is your setting for max_connections) at 20:00. Once max_connections has been hit, things do tend to take a while to recover.
Using mtop to monitor MySQL just before the hour will really help you figure out what is going on, but if you cannot install it you could just use SHOW PROCESSLIST;. You will need to establish your connection to mysql before the problem hits. You will probably see lots of processes queued with only 1 process currently executing. That will be the most likely culprit.
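For example, a few plain statements run from a mysql session opened just before the top of the hour will show the connection ceiling, how close you are to it, and what every connection is doing:

    -- Confirm the connection ceiling and how close you are to it
    SHOW VARIABLES LIKE 'max_connections';
    SHOW STATUS LIKE 'Threads_connected';

    -- Full query text for every connection, queued or running
    SHOW FULL PROCESSLIST;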
Having identified the query causing the problems you can attack your code. Without understanding how your application is actually working my best guess would be that using an explicit transaction around the problem query(ies) will probably solve the problem.
Good luck!
There is an action in the admin section of a client's site, say Admin::Analytics (that I did not build but have to maintain) that compiles site usage analytics by performing a couple dozen, rather intensive database queries. This functionality has always been a bottleneck to application performance whenever the analytics report is being compiled. But, the bottleneck has become so bad lately that, when accessed, the site comes to a screeching halt and hangs indefinitely. Until yesterday I never had a reason to run the "top" command on the server, but doing so I realized that Admin::Analytics#index causes mysqld to spin at upwards of 350+% CPU power on the quad-core, production VPS.
I have downloaded fresh copies of production data and the production log. However, when I access Admin::Analytics#index locally on my development box, while using the production data, it loads in about 10 - 12 seconds (and utilizes ~ 150+% of my dual-core CPU), which sadly is normal. I suppose there could be a discrepancy in MySQL settings that has suddenly come into play. Also, a mysqldump of the database is now 531 MB, when it was only 336 MB 28 days ago. Anyway, I do not have root access on the VPS, so tweaking mysqld performance would be cumbersome, and I would really like to get to the exact cause of this problem. However, the production logs don't contain info on the queries; they merely report how long these requests took, which averages out to a few minutes apiece (although they seem to have caused mysqld to stall for much longer than that, prompting me in one instance to ask our host to reboot mysqld just to get our site back up).
I suppose I can try upping the log level in production to get info on the database queries being performed by Admin::Analytics#index, but at the same time I'm afraid to replicate this behavior in production because I don't feel like calling our host up to restart mysqld again! This action contains a single database request in its controller, and a couple dozen prepared statements embedded in its view!
How would you proceed to benchmark/diagnose and optimize/fix this action?!
(Aside: Obviously I would like to completely replace this functionality with Google Analytics or a similar solution, but I need to fix this problem before proceeding.)
I'd recommend taking a look at this article:
http://axonflux.com/building-and-scaling-a-startup
Particularly, query_reviewer and newrelic have been a life-saver for me.
I appreciate all the help with this, but what turned out to be the fix was adding a couple of indexes on the Analytics table to cater to the queries in this action. A simple Rails migration added the indexes, and the action now loads in less than a second both on my dev box and in prod!