I've grabbed a load of tweets using the sample API. That's great. I know how to 'chuck' the tweets that are actually notifications of a tweet being deleted, but there are lots of other cases, and who knows when another will be added? The cases are in fact mentioned here:
Tweets can be embedded, replied to, favorited, unfavorited, retweeted, unretweeted and deleted.
How do I know, though, that I've got a POTO (Plain Old Twitter Object) and not a delete, favourite, etc.?
Is there a standard process that could identify this? The delete notification, for instance, starts with {"delete":{..., so I can match for that as a string or try to look it up as a key (in JSON-Simple, it's done like so). What would be nice is if all the POTOs started with
{"tweet":{...; then I could pick them out, rather than having to discard anything that wasn't a POTO (which needs a check for every single non-POTO case).
I could just use the key checker to find every key that I need and hope that it's relevant and correct (for instance, both the tweet body and the tweeter/user have an id; unless it's a delete, in which case it's an id and a user_id, and to get both you need to go into the entities). But since I'm going to use Storm eventually, I may end up plugging in a bolt later on, and then I'd have to go back and change my checks.
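The best I can do today is a whitelist check along these lines (TypeScript for brevity; the same key lookup works in JSON-Simple, and the assumption that only real tweets carry top-level "text" and "user" is mine):

// Whitelist instead of blacklist: of all the streaming messages,
// only a real status (tweet) carries top-level "text" and "user";
// control messages ("delete", "limit", "scrub_geo", ...) are each
// wrapped in their own single key.
function isPlainTweet(raw: string): boolean {
  const msg = JSON.parse(raw);
  return msg !== null && typeof msg === 'object'
    && 'text' in msg && 'user' in msg;
}

// Usage: keep only real tweets, discard everything else in one check.
// const tweets = rawLines.filter(isPlainTweet);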
So, is there a simple way to distinguish a tweet that's actually a tweet (a POTO) from everything else?
In a Vue.js app the main focus is working with prospects. Prospects have many things like contacts, listings, and half a dozen other objects/tables.
They also have interactions, of which there could be 30 or more per prospect, while most things like emails or phones would have 1-3 results. I load 50 prospects at a time into the front end.
I'm trying to decide if loading it all into the front end to work 50 prospects at a time is a good idea, or if I should have a JSON column with interactions as part of the prospects table that I would update each time an interaction is saved, with minimal info like date, type, subject...
It seems like an extra step (and duplicate data, how important is that?) to update the JSON column with each interaction, but it also seems like it would save looking up and loading data all the time.
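To make the JSON-column idea concrete, here is roughly what I have in mind (the field names are just my working assumption):

// Shape of one entry in the prospects.interactions JSON column,
// duplicated from the interactions table for fast display.
interface InteractionSummary {
  date: string;     // e.g. "2021-05-04"
  type: string;     // e.g. "email", "call"
  subject: string;
}

// Each time an interaction is saved, append a summary and write
// the array back to the prospect's JSON column.
function appendSummary(current: InteractionSummary[], saved: InteractionSummary): InteractionSummary[] {
  return [...current, saved];
}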
I'm not a programmer, but I have been teaching myself how to do something I need done for my business with tutorials and YouTube; any opinions from those who deal with this professionally would be appreciated.
Also, if anyone wants to tell me how to ask this question in a better-formatted way, I'm all ears.
Thanks.
Imagine you have 1000 records but you are sending only 50 of them, and your user filters by price. Will you display only the filtered data from those 50, or from all 1000?
That depends on whether you want to expose all 1000 records to the front end. It's a choice between that and calling your server API every time.
If you are calling the server, consider using a cache like Redis to store your results.
Pseudo code:

Read request received:
  Check the Redis cache: Redis.get('key')
  If the key exists, return the cached value.
  Else:
    Query MySQL for the latest results.
    Redis.set('key', latest results);
    Return the results.

Create request received:
  Write to MySQL.
  Redis.delete('key'); // the next read request will build a new cache entry with fresh data.

Your key can be anything, e.g. your URL ('/my/url').
https://laravel.com/docs/8.x/redis
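To make that concrete, a minimal cache-aside sketch in TypeScript with ioredis; loadFromDb() is just a stand-in for your real MySQL query:

import Redis from 'ioredis';

const redis = new Redis(); // connects to localhost:6379 by default

// Stand-in for the real MySQL query.
async function loadFromDb(): Promise<string> {
  return JSON.stringify([{ id: 1, price: 100 }]);
}

// Read path: a cache hit returns immediately, a miss rebuilds the cache.
async function getResults(key: string): Promise<string> {
  const cached = await redis.get(key);
  if (cached !== null) return cached;
  const fresh = await loadFromDb();
  await redis.set(key, fresh, 'EX', 60); // optional TTL as a safety net
  return fresh;
}

// Write path: update MySQL first, then drop the stale cache entry.
async function createAndInvalidate(key: string): Promise<void> {
  // ...write to MySQL here...
  await redis.del(key); // the next read rebuilds the cache with fresh data
}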
How do I generate MySQL queries with LUIS and fetch data from the DB hosted in Azure?
I want to turn a natural language query into a MySQL query.
e.g.
How much beer was drunk at Oktoberfest 2018?
--> GET amountOfBeer FROM Oktoberfest WHERE Year == 2018;
Does anyone have an idea how to get this to work?
I have already created small intents in LUIS, e.g. GetAmountOfBeer.
I don't know how to generate the MySQL statements or how to fetch the data from the DB.
Thanks.
You should be able to achieve this, or something similar, using intents and entities. How successful this can be depends on how many and how diverse your queries need to be. First, let's start with the phrase you mentioned: "How much beer was drunk at Oktoberfest 2018?". You can easily (as you've done) add this as an utterance for an intent, GetAmountOfBeer. Though I'm a fan of intent names that you can read as "I want to GetAmountOfBeer", here you may want to name the intent amountOfBeer so you can use it in your query directly.
Next you need to set up your entities. For year (or datetime rather) that should be easy, as I believe there are predefined entities for this. I think you need to use a datetime recognizer to parse out the right attribute (like year), but I haven't tried this before. Next, Oktoberfest seems to be a specific holiday or event in your DB, so you could create a list entity of all the events you have.
What you are left with is something like (pseudocode) GET topIntent FROM eventEntity WHERE Year == datetime.Year, or something like that.
If your query set is more complex, you might have to have multiple GET statements, but you could put those in a switch statement by topIntent so that, no matter what the intent is, you can parse out the correct values. You also might want to build this into a dialog where you can check if the entities exist, and if not, you can prompt the user for the missing data.
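A rough sketch of that switch in TypeScript, assuming a simplified LUIS result shape and placeholder table/column names (neither is the real LUIS API):

// Simplified LUIS result shape (an assumption, not the full API).
interface LuisResult {
  topIntent: string;
  entities: { event?: string; year?: number };
}

// Stand-in for a real MySQL call; wire up your client of choice here.
async function runQuery(sql: string, params: unknown[]): Promise<unknown[]> {
  console.log(sql, params);
  return [];
}

// One switch by topIntent; add a case per intent as your query set grows.
async function handleQuery(result: LuisResult): Promise<unknown[]> {
  switch (result.topIntent) {
    case 'amountOfBeer':
      // Parameterized to avoid SQL injection.
      return runQuery(
        'SELECT amount_of_beer FROM events WHERE name = ? AND year = ?',
        [result.entities.event, result.entities.year],
      );
    default:
      throw new Error(`no query mapped for intent ${result.topIntent}`);
  }
}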
I have a MySQL table which stores users' scores. Every time a user answers a question correctly, I increase his or her score by one using an AJAX request. The request sends just an integer, which is the id of the question.
My Question is: How to prevent fake AJAX requests?
As it is just an integer, I can't check whether the request is fake or not. So the only solution I have come up with is to add an extra column to my table, named "yesterday_score"; as its name describes, it is a column that changes at 00:00 and saves the user's score. If a user adds more than 300 to his score in a day, I assume it is a hack, and I prevent it.
Check the answer and increment the score on your back end, not on the front end.
Never trust user input: that is rule number one!
Rather than sending the new number to the database, you can let the database itself update the value. So in MySQL:
UPDATE users SET score = score + 1 WHERE user_id = 12
user_id can be verified by comparing it with the session or something of the sort. Be sure to use prepared statements too.
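For illustration, a sketch with Node's mysql2 (assuming the user id comes from a server-side session, never from the request body):

import mysql from 'mysql2/promise';

const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'quiz' });

// The client only tells us *which* question was answered; the score
// delta is fixed server-side and user_id comes from the session.
async function incrementScore(sessionUserId: number): Promise<void> {
  await pool.execute(
    'UPDATE users SET score = score + 1 WHERE user_id = ?',
    [sessionUserId], // bound as a prepared-statement parameter
  );
}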
I read a lot of related pages; some users suggested solutions along the lines of: "If a user hits 10 headshots in 10ms then you kick him. Write a clever cheat detection algorithm."
And there is an answer to the same question:
There is no way to avoid forged requests in this case, as the client browser already has everything necessary to make the request; it is only a matter of some debugging for a malicious user to figure out how to make arbitrary requests to your backend, and probably even using your own code to make it easier. You don't need "cryptographic tricks", you need only obfuscation, and that will only make forging a bit inconvenient, but still not impossible.
on this page:
How to block external http requests? (securing AJAX calls)
I might also use PHPIDS. But for now I think I will stick with my solution: I add another column holding the user's "yesterday score", and if the user gains more than 100 points today I will know he is definitely cheating, so I won't add the extra score.
Add a hidden field to the form, and put in md5(session_id()).
If the answer is correct, call session_regenerate_id().
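The same idea sketched in TypeScript with Express and express-session (the PHP session_id()/session_regenerate_id() pair roughly maps to req.sessionID and req.session.regenerate(); routes and markup are placeholders):

import crypto from 'crypto';
import express from 'express';
import session from 'express-session';

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

const tokenFor = (sid: string) => crypto.createHash('md5').update(sid).digest('hex');

// Serve the form with the hidden token field.
app.get('/question', (req, res) => {
  res.send(`<form method="post" action="/answer">
    <input type="hidden" name="token" value="${tokenFor(req.sessionID)}">
    <input name="answer"><button>Send</button></form>`);
});

// Verify the token, score server-side, then rotate the session id
// so the token cannot be replayed.
app.post('/answer', (req, res) => {
  if (req.body.token !== tokenFor(req.sessionID)) {
    res.status(403).end();
    return;
  }
  // ...check the answer and increment the score here...
  req.session.regenerate(() => res.end('ok')); // like session_regenerate_id()
});

app.listen(3000);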
I am developing an app with PhoneGap and have been storing the user id and user level in local storage, for example:
window.localStorage["userid"] = "20";
This is populated once the user has logged in to the app. It is then used in AJAX requests to pull in their information and things related to their account (some of it quite private). The app is also used in a web browser, as I am using the exact same code for the web. Is there a way this can be manipulated? For example, could the user change the value of it in order to get back info that isn't theirs?
If, for example, another app in their browser stores the same key "userid", it will overwrite mine, and then they will get someone else's data back in my app.
How can this be prevented?
Before going further into attack vectors: storing this kind of sensitive data on the client side is not a good idea. Use a token instead, because every single piece of data stored on the client side can be spoofed by attackers.
Your concerns are right. A possible attack vector is Insecure Direct Object Reference. Let me show one example.
You are storing the userID client side, which means you cannot trust that data anymore.
window.localStorage["userid"] = "20";
Hackers can change that value to anything they want. Probably they will change it to a value less than 20, because the most common setup is that 20 comes from an auto-increment column, which means there should be a valid user whose userid is 19, or 18, or less.
Let me assume that your application has a module for getting products by userid. The backend query would then be similar to the following one.
SELECT * FROM products WHERE owner_id = 20
When hackers change that value to something else, they will manage to get data that belongs to someone else. They could also get the chance to remove or update data that belongs to someone else.
The possible malicious attack vectors really depend on your application and its features. As I said before, you need to figure this out and not expose sensitive data like the userID.
Using a token instead of the userID will stop these break-in attempts. The only thing you need to do is create one more column, named "token", and use it instead of the userid. (Don't forget to generate long and unpredictable token values.)
SELECT * FROM products WHERE owner_id = 'iZB87RVLeWhNYNv7RV213LeWxuwiX7RVLeW12'
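For the "long and unpredictable" part, a minimal sketch using Node's built-in crypto (where you store the result is up to your schema):

import { randomBytes } from 'crypto';

// 32 random bytes -> 64 hex characters: long and unpredictable,
// unlike an auto-increment userid.
function generateToken(): string {
  return randomBytes(32).toString('hex');
}

// Assign a fresh token when creating a user and store it in the
// new "token" column; look products up by it instead of by userid.
console.log(generateToken());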
I'm about to implement a list of topics in my forum, and I'd like to add a sort of "read / not read yet" flag for each message, for each user on my website.
I'm thinking of something like this: a table watched_topics with id (INT), user (VARCHAR) and topic_id (INT). When a user views the page, I'll insert this information (if it doesn't already exist).
When another user inserts a new message in a topic, I'll delete from the watched_topics table all rows with that topic_id.
That could cause trouble: think about 9000 topics and 9000 users that have watched all the topics: the table will get huge (9000 x 9000 = 81,000,000 rows).
So I think this is not the best strategy to implement this kind of stuff! Any suggestion would be appreciated :)
Cheers
May I suggest a different approach?
Make use of the web browser history mechanism.
Every topic can get a new, unique URL every time a new message is added to it. It could include the number of messages, the last modified time or a combination of both.
If the user has seen the topic, he must have visited it, so properly set up CSS (e.g. a :visited selector on the topic links) can help identify the read ones. You can even use some client-side scripts to modify the behaviour of the page based on that.
Another way would be to keep the watched_topics table the way you want, but also store the last visit time in the user's profile and show as read all topics that haven't changed since that time.
However it's pretty safe to assume that all users reading all topics is very unlikely.
Your suggestion sounds good. I would make the user field a foreign key as well; it gives you a bit more flexibility.
Are you sure all 9000 topics will be read by all 9000 users? I mean, is this realistic? Like you said, topic entries are deleted when a new message is added, and when that happens, another 9000 entries are gone :)
I would index the table and go with your suggestion (with the user_id change). If the table size gets in your way, you can always change the implementation later. Most likely it will never be an issue anyway.
For the deletion: you could save the ID of the latest message the user saw. That way you do not have to perform a lot of delete actions every time a message is posted in a much-viewed topic.
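A rough sketch of that last-seen approach with mysql2 (table and column names such as last_seen_msg_id and topics.last_msg_id are assumptions, and it presumes a UNIQUE(user_id, topic_id) index on watched_topics):

import mysql from 'mysql2/promise';

const pool = mysql.createPool({ host: 'localhost', user: 'forum', database: 'forum' });

// Mark a topic as read: remember the newest message id the user saw.
async function markRead(userId: number, topicId: number, lastMsgId: number) {
  await pool.execute(
    `INSERT INTO watched_topics (user_id, topic_id, last_seen_msg_id)
     VALUES (?, ?, ?)
     ON DUPLICATE KEY UPDATE last_seen_msg_id = VALUES(last_seen_msg_id)`,
    [userId, topicId, lastMsgId],
  );
}

// A topic is unread if the user never saw it or it has newer messages;
// no rows ever need to be deleted when a message is posted.
async function unreadTopics(userId: number) {
  const [rows] = await pool.execute(
    `SELECT t.id FROM topics t
     LEFT JOIN watched_topics w ON w.topic_id = t.id AND w.user_id = ?
     WHERE w.last_seen_msg_id IS NULL OR t.last_msg_id > w.last_seen_msg_id`,
    [userId],
  );
  return rows;
}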