RTweet new lookup_users continued excess error

I was using R's rtweet package and lookup_users() to successfully pull data on six IDs:
test <- c("ID1", "ID2", "ID3", "ID4", "ID5", "ID6")
detail_followers <- lookup_users(users = test,
                                 parse = TRUE,
                                 token = [bearer_token here],
                                 retryonratelimit = TRUE,
                                 verbose = TRUE)
Update: I created a loop so I could get more visibility on the run, which made it profoundly slow. That would be tolerable, except for two new problems. First, some users are missing the "profile_banner_url" variable. I built a function to check for this and insert a NULL value for the users missing it, which slows the run further.
Second, many users in this set appear to be bots: they are deleted within 24 hours of my having just pulled them from the API. The error from lookup_users is "Error: $ operator is invalid for atomic vectors". Checking with is.atomic doesn't help, because the fault occurs inside the lookup_users call itself. There are far too many of these fake/suspended/deleted users to handle with manual prompts.
These may be issues with the new RTweet from GitHub.
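A possible workaround for both problems (a minimal sketch, assuming rtweet 1.x and that token holds your bearer token object; safe_lookup is a hypothetical helper, not part of rtweet) is to wrap each lookup in tryCatch() so deleted/suspended accounts are skipped instead of aborting the run, and to backfill profile_banner_url where it is absent:

library(rtweet)

# Hypothetical helper: look up a single ID, returning NULL instead of
# failing when the account has been deleted or suspended (the
# "$ operator is invalid for atomic vectors" case).
safe_lookup <- function(id, token) {
  tryCatch(
    lookup_users(users = id, parse = TRUE, token = token),
    error = function(e) {
      message("Skipping ", id, ": ", conditionMessage(e))
      NULL
    }
  )
}

results <- lapply(test, safe_lookup, token = token)
results <- Filter(Negate(is.null), results)

# Backfill the banner column for users that lack it.
results <- lapply(results, function(df) {
  if (!"profile_banner_url" %in% names(df)) df$profile_banner_url <- NA_character_
  df
})

detail_followers <- do.call(rbind, results)

This is slower than one batched call, but it keeps a single bad ID from killing the whole run.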


Why is this error causing a loop instead of exiting the function?

I was casting around SE and the web yesterday looking for a way to concatenate specific fields from multiple records as part of a query/report in my Access database. I found this SE question, which led me to Allen Browne's ConcatRelated() function here. After getting off on the wrong foot by naming the module the same as the function, I got it working as desired.
However, on my initial attempt to use it in a query, I made a dumb mistake and fed it parameters that equated to "WHERE Employee = " & [Employee] instead of the correct "WHERE Employee = '" & [Employee] & "'" necessary to evaluate as a string. This resulted in the expected 3464 (data type mismatch) runtime error, but with Allen's modifications it showed the actual string that caused the error. No surprise: I'll just click OK and go fix the SQL. I was surprised to find that didn't work. As soon as I clicked away the MsgBox, it would come back again. I used Ctrl+Break to stop the code, but clicking End sent it right back to the same error rather than actually ending the function. The only way I could stop the loop was by clicking Debug, which dropped me into the code as expected, but I couldn't get out of the code to change the original query. Hitting Reset caused the error to pop again and put me right back in the loop.
I eventually got out of it by commenting everything out of the error handler and telling it to Exit Function. I knew that most likely only masked the problem, so I added a counter and had it increment every time the error handler was called, then checked the value in the Immediate window. Sure enough, it would go up by 10 or more each time I clicked somewhere in the query.
What I don't understand is what is causing the error to loop. Err_Handler should dump it out of the function on the first error, since there's nothing telling it to attempt a Resume, and it specifically directs it to Exit_Handler, which resets a couple of variables and then does Exit Function. Even putting Exit Function directly in Err_Handler still causes multiple increments to my error counter. Further experimenting with Debug highlights the line Set rs = DBEngine(0)(0).OpenRecordset(strSql, dbOpenDynaset). I don't know that this helps much, since it's the first line after the function constructs the value of strSql. Commenting it out caused the next line to be highlighted.
I know what caused the initial error (my mistake) and that's an easy fix. I'm more interested in the unintended result of the mistake and the loop it created.
tl;dr
What would cause this error to loop instead of exiting or breaking?
Why would it seem to occur a finite number of times when the error handler calls an immediate exit, but loop infinitely (or at least until I got tired of clicking) when something like a MsgBox comes first?
Is there a way to circumvent this in the code so that it actually stops after the first error?
I initially was using a table with 600+ records but then experimented further with a small table of only 10 records.
Small-table query used to reproduce the error:
SELECT TestEmpHistory.Employee,
       ConcatRelated("EmpGroup", "TestEmpHistory", "Employee = " & Employee) AS CompliedNames
FROM TestEmpHistory;
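For reference, the corrected call (the fix for the quoting mistake described above, assuming Employee is a text field) quotes the value so the criterion evaluates as a string:

SELECT TestEmpHistory.Employee,
       ConcatRelated("EmpGroup", "TestEmpHistory",
                     "Employee = '" & [Employee] & "'") AS CompliedNames
FROM TestEmpHistory;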
The table TestEmpHistory is in the following format:
ID | Employee  | EmpGroup
---+-----------+------------
 1 | Employee1 | G&A
 2 | Employee1 | Sales
 3 | Employee2 | CSR
 4 | Employee2 | G&A
 5 | Employee3 | CSR
 6 | Employee3 | Programming
 7 | Employee3 | G&A
 8 | Employee4 | CSR
 9 | Employee4 | CSR
10 | Employee4 | Programming
(ID is an AutoNumber; the table's other fields are not referenced in the query.)
I'm using Access 2016.
Without reading the whole post, I'm guessing the reason you're in a loop is that the query runs this code for every row it returns. The only way to get out of it is, when you break into the code, to temporarily comment it all out so that the query can complete without generating the error for every row. Once it has completed, you can go back and correct your query design before un-commenting the function code and trying again.
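To make that concrete: because the query invokes the function once per row, the handler's MsgBox reappears for each pending row. A minimal sketch of a handler that reports only the first failure (this is not Allen Browne's actual code; SafeConcatDemo and bShown are made-up names, while TestEmpHistory/EmpGroup come from the question):

' Query-callable function whose error handler pops a MsgBox only for the
' FIRST failure, instead of once for every row the query evaluates.
Public Function SafeConcatDemo(strWhere As String) As Variant
    On Error GoTo Err_Handler
    Dim rs As DAO.Recordset
    Dim strSql As String
    Dim strOut As String

    strSql = "SELECT EmpGroup FROM TestEmpHistory WHERE " & strWhere
    Set rs = DBEngine(0)(0).OpenRecordset(strSql, dbOpenDynaset)
    Do While Not rs.EOF
        If Not IsNull(rs!EmpGroup) Then strOut = strOut & ", " & rs!EmpGroup
        rs.MoveNext
    Loop
    rs.Close
    If Len(strOut) > 0 Then strOut = Mid$(strOut, 3)
    SafeConcatDemo = strOut

Exit_Handler:
    On Error Resume Next
    Set rs = Nothing
    Exit Function

Err_Handler:
    Static bShown As Boolean            ' persists across calls in the session
    If Not bShown Then
        bShown = True
        MsgBox "Error " & Err.Number & ": " & Err.Description & _
               vbCrLf & strSql, vbExclamation, "SafeConcatDemo"
    End If
    SafeConcatDemo = Null               ' later rows fail quietly to Null
    Resume Exit_Handler
End Function

The Static flag stays set for the rest of the session, so a real version would need a way to reset it, but it shows why a plain MsgBox in the handler appears to loop: Access simply calls the function again for the next row.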

SSIS Errors for simple CSV Data Flow

Sorry to darken your day with my troubles, but SSIS has broken me! I am new to SSIS and I just seem to be misunderstanding it.
For background: I have a few versions of a basic package that includes a Foreach Loop container and a Data Flow with a few Derived Columns that imports CSV files into a SQL Server Staging table. It is very straightforward and does include an Execute SQL task and a File Move but those work fine. The issues are with the Foreach loop and the Data Flow.
I have one version of this package (let’s call it “A”) that seemed to be working fine. It would process multiple files in a folder, insert records into the staging table, properly execute the SQL Statements, and move the files to Archive. Everything seemed fine until I carefully QA’d the process. Turns out it was duplicating the data from one file, and never importing the data from a second Source File! Yet, the second/dupe round of data included the Source Filename (via a derived column) of the second file (but the data from the first). So it looked like I had successfully processed BOTH files until I looked at the actual data and saw that none of the values from the second source file were ever written to the Staging table.
Once I discovered this, I figured the problem was in the Foreach loop and how I set up the different file path & name variables. So I decided to make a new version of the package. I started by copying package A to create package B. In B, I deleted the Source Connection Manager and created a new one, along with all new file & path variables. I then tried to clean up, fix, or replace various elements in my Data Flow and Foreach loop. In the process, I discovered that the Advanced mappings from A (which DID work) were virtually all set up as String, even the Currency and Date columns. That did not seem right, so I changed each source money column to data type Currency, and each date-related column to data type Date.
What followed has been dozens and dozens of Errors and I cannot get Package B to run. I have even changed all of the B data types back to String (mirroring the setup in Package A which DID work). But, still no joy.
This leads me to ask a few questions to those of you smarter than I:
1) Why can’t SSIS interpret Source CSV data using the proper data type? I.e. why do I need to set every Input column as a STRING when some columns are clearly & completely Numeric, Currency or Dates? (Yes, the Source CSV files are VERY clean – most don’t even have NULLS)
a. When I do change the Advanced mapping for a date-related Source column to Date, I get the ever present error message: [Flat File Source [30]] Error: Data conversion failed. The data conversion for column "Settle Date" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
2) When I reset the data types back to String in package B, I still get errors – usually Truncation errors (and Yes – I have adjusted the length to 250 in one of these columns).
a. Error Message: "The value could not be converted because of a potential loss of data.".
b. When I reset the Mappings to ignore the column (as a test), it throws a similar error at the next column.
3) Any ideas why Package A would dupe a file’s data and not process the second file, yet throw no errors and move both to Archive?
4) Why does the Data Viewer appear to have parsing errors (it shows data in the wrong columns) but when you use the Copy data feature in the data viewer and paste it into Excel, all of the data lines up perfectly?
5) Are there any tips & tricks that a rookie SSIS user needs to understand and which might not be apparent through the documentation and searching web articles as well as this site?
I can provide further details if they will help, but these packages are really very simple and should not be causing me this much frustration.
THANKS for any insights.
DGP
Wow, seems like you have a lot of SSIS issues... I think the reason the same file keeps being extracted is the way your Variable Mappings are defined (for example, if the flat-file Connection Manager's ConnectionString expression is not bound to the loop's file variable, every iteration re-reads the same file even though the variable, and any filename derived from it, changes).
Have you had a look and followed this guide:
https://www.simple-talk.com/sql/ssis/ssis-basics-introducing-the-foreach-loop-container/
Hope this helps.
Shaheen
Thanks Tab & Shaheen,
To all SSIS rookies - please learn from my mistakes!
It appears that my issue was actually in how I identified the TEXT QUALIFIER in the Connection Manager. I had entered "" and that was causing problems with how my columns were being parsed. The parsing issues caused unexpected values to appear in some of the columns and that was causing the errors in the package.
When I tried changing the Text Qualifier to only ONE double quote (") the whole thing worked!
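To illustrate with a made-up row (not from the actual files):

123,"Smith, John",2024-01-15,"1,250.00"

With the Text Qualifier set to a single double quote ("), the parser strips the quotes and keeps the embedded commas inside their fields, giving four columns. With the literal two-character qualifier "", nothing ever matches, so the embedded commas split the row into six columns, which produces exactly the shifted values and truncation/conversion errors described above.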
As I mentioned, and as Shaheen suspected, my initial issue with the duplicate processing was probably due to how I set up the Foreach loop. I had already fixed that, but was still getting errors until I fixed the Text Qualifier.
I have only tested it a few times but it looks like that was the issue.
Thanks for the contributions.
DGP

zabbix regex to trigger for wrong data type

I have an item of type float, but sometimes a string is received in case of error instead of a number. How can I make a trigger regexp to fire in this case?
I have no idea how to check for a "wrong data type".
Actually this is by design and what I'm trying to do is this: if the data gathering fails, I send an error message in order to see it on zabbix end.
I tried with nodata(0), but this doesn't seem to work.
In your case Zabbix will not store the "wrong" value for the item. And if you don't care what the string is, then you can just set up a trigger on "nodata" for the period of your interval. Look in the triggers manual and search for "nodata".
Edit: scratch that, didn't read the whole question ....
Edit 2: if you are certain that this is not working by design, and not because your trigger interval misses the data interval, then you can try to catch the unsupported status. There is an open feature request for this, but you can set up a side script similar to this. Or you can wrap the monitored item on the node in a UserParameter script that reads the value and prints -1 or something if it is not a number, then proceed with a normal numeric trigger.
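A minimal sketch of that UserParameter approach (the key name and paths are made up; get_value stands in for whatever currently produces the item's value):

# zabbix_agentd.conf
UserParameter=myapp.value.safe,/usr/local/bin/safe_value.sh

# /usr/local/bin/safe_value.sh
#!/bin/sh
v=$(/usr/local/bin/get_value)        # the existing data-gathering command
case "$v" in
    ''|*[!0-9.]*) echo -1 ;;         # empty or non-numeric: emit a sentinel
    *)            echo "$v" ;;
esac

A normal numeric trigger such as {host:myapp.value.safe.last(0)}=-1 can then fire whenever the gathering step returns an error string instead of a number.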

PowerBuilder Find function throws an error "expression is not valid"

What is wrong with this code? I am checking whether a record already exists in the database before inserting a new serial number. Whenever I enter a record, whether it exists or not, it throws the error message "Expression is not valid". (PowerBuilder Classic 12.5 and SQL Server 2008)
If This.GetColumnName() = "serial_No" Then
    long ll_serial
    ll_serial = dw_newrecord.Find(data, 1, dw_newrecord.RowCount())
    If ll_serial > 0 Then
        MessageBox("Validation error", "The record already exists")
        Return 1
    End If
End If
It is likely that your Find expression has a syntax error. It can be some malformed code (like missing quotes), or maybe the column name is incorrect.
To help tune a filter or find expression, you can test it in the datawindow design screen via the Rows / Filter menu.
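Given the code in the question, one likely specific cause is that Find() expects a boolean DataWindow expression, while the bare value of data is passed instead. A corrected sketch (assuming this code lives in the ItemChanged event of the DataWindow control, so data holds the new value as a string):

// Find() takes a DataWindow expression; a raw value like "12345" is not
// a valid expression and raises "Expression is not valid".
long ll_serial
string ls_expr

If This.GetColumnName() = "serial_No" Then
    ls_expr = "serial_No = '" + data + "'"   // quote the value into the expression
    ll_serial = dw_newrecord.Find(ls_expr, 1, dw_newrecord.RowCount())
    If ll_serial > 0 Then
        MessageBox("Validation error", "The record already exists")
        Return 1
    End If
End If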
A better long-term solution would be to integrate the DataWindow Debug Machine (made by a colleague of mine) into your project. It is a precious tool for prototyping datawindow expressions for finding and filtering, but also for dynamic object creation/modification in a datawindow. And when properly interfaced with a datawindow ancestor of your project, it can help with filter and find expression errors like this one.
EDIT: As RealHowTo noticed, the tool has been updated. Here is the current latest version (though there is no updated demo screencast).

Linq 2 SQL on shared host

I recently ran into an issue with linq on a shared host.
The host is Shared Intellect and they support v3.5 of the framework. However, I am uncertain whether they have SP1 installed. My suspicion is that they do not.
I have a simple News table that has the following structure:
NewsID        uniqueidentifier
Title         nvarchar(250)
Introduction  nvarchar(1000)
Article       ntext
DateEntered   datetime (default getdate())
IsPublic      bit (default true)
My goal is to display the 3 most recent records from this table. I initially went the D&D (drag-and-drop) route (I know, I know) and created a LINQ data source, but was unable to find a way to limit the results the way I wanted, so I removed it and wrote the following:
var dc = new NewsDataContext();
var news = from a in dc.News
           where a.IsPublic == true
           orderby a.DateEntered descending
           select new { a.NewsID, a.Introduction };
lstNews.DataSource = news.Take(3);
lstNews.DataBind();
This worked perfectly on my local machine.
However, when I uploaded everything to the shared host, I received the following error:
.Read_<>f__AnonymousType0`2
(System.Data.Linq.SqlClient.Implementation.ObjectMaterializer`1<System.Data.SqlClient.SqlDataReader>)
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.MethodAccessException:
.Read_<>f__AnonymousType0`2
(System.Data.Linq.SqlClient.Implementation.ObjectMaterializer`1<System.Data.SqlClient.SqlDataReader>)
I tried to search the error on Google, but met with no success. I then tried to modify my query in every way I could imagine, removing various combinations of the where/orderby parameters as well as limiting my query to a single column and even removing the Take command.
My Question therefore comes in 3 parts:
Has anyone else encountered this and if so, is there a "quick" fix?
Is there a way to use the datasource to limit the rows?
Is there some way to determine what version of the framework the shared host is running, short of emailing them directly (which I have done and am awaiting an answer)?
System.MethodAccessException is thrown by the framework when it is missing an assembly, or when one of the references is of the wrong version.
The first thing I would do is try uploading the LINQ assemblies to your bin folder and referencing them there, instead of relying on the shared hosting provider's GAC.
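If the host does turn out to be on 3.5 RTM, the anonymous-type projection itself is a known trigger for this exception under medium/partial trust (fixed in SP1). A workaround worth trying is to project into a named public class instead; a sketch, with NewsSummary as a made-up name:

// Named projection type; materializing anonymous types is what throws
// MethodAccessException on some pre-SP1 / medium-trust hosts.
public class NewsSummary
{
    public Guid NewsID { get; set; }
    public string Introduction { get; set; }
}

// ...then, in place of the original query:
var dc = new NewsDataContext();
var news = (from a in dc.News
            where a.IsPublic == true
            orderby a.DateEntered descending
            select new NewsSummary
            {
                NewsID = a.NewsID,
                Introduction = a.Introduction
            }).Take(3);

lstNews.DataSource = news.ToList();
lstNews.DataBind();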