Modify content records in Bolt CMS

I've added a 'product' content type to my Bolt installation. I want to track and update inventory when a purchase is made, so I added an integer field 'available'. I set up a test controller to modify the record. Everything appears to work, but the update never happens. What am I missing?
<?php

namespace Bundle\Site;

class CartController extends \Bolt\Controller\Base
{
    public function addRoutes(\Silex\ControllerCollection $c)
    {
        $c->match('/test-save', [$this, 'testSave']);
        return $c;
    }

    public function testSave()
    {
        $result = false;
        $repo = $this->app['storage']->getRepository('products');
        $content = $repo->find(1);
        // Error log output confirms that this is the correct record
        error_log(get_class($this).'::'.__FUNCTION__.': '.json_encode($content));
        $content->set('available', 15);
        $content->setDatechanged('now');
        $result = $repo->save($content); // returns 1
        return new \Symfony\Component\HttpFoundation\Response(json_encode($result), \Symfony\Component\HttpFoundation\Response::HTTP_OK);
    }
}

It turns out that my code works. I was checking the result by reloading the back-end edit form, which had apparently cached its data, so I didn't see the value changing.
The same thing happens if I load the edit form in two browser tabs and update one, then refresh the other, so it looks like this is the best I can do.

I have faced a similar problem: either the updated code didn't show up, or the output was unchanged. It turned out this happened because debug: false was set in config.yml. When debug is true, the cache files in bolt-data/app/ keep growing, but the stale-data problem sorts itself out.
In production it is recommended to keep debug: false, but then this problem shows up again, so the way you are already dealing with it is the easiest way out without changing the configuration much.
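For reference, a sketch of the settings involved (the exact key names vary between Bolt versions, so treat this as an assumption and verify against your own config.yml before relying on it):

# config.yml (sketch; verify keys against your Bolt version)
debug: true          # verbose errors and less aggressive caching while developing
caching:
    config: true     # individual caches can be switched off while chasing stale data
    templates: true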

How do I debug lua functions called from conky?

I'm trying to add some lua functionality to my existing conky setup so that repetitive "code" in my conky text can be cleaned up. For example, I have information for each mounted FS, each core, etc. where each row displayed in my panel differs ONLY by one parameter.
My first skeletal attempt at using Lua functions for this seems to run, but displays nothing in my panel. I've only found very simple examples to base this on, so I may have made a simple error, but I don't even know how to diagnose it. My code here is modeled after what I have been able to find on writing functions, such as How to implement a basic Lua function in Conky?, but that's about all the depth I've found on the topic apart from drawing and cairo examples.
Here's the code added to my conky config, as well as the contents of my functions.lua file
conky.config = {
    ...
    lua_load = '/home/conky-manager/MyConky/functions.lua',
};

conky.text = [[
...
${voffset 5}${lua conky_test 'test'}
...
]]
file - functions.lua
function conky_test(parm1)
    return 'result text'
end
What I would expect to see is "result text" displayed in my panel at the location where that function call appears, but nothing shows.
Is there a log created by conky as it runs, or a way to provide some debug output? Even if I'd made a simple error here, I'd still like to have the ability to diagnose things as my code gets more complex.
Success!
After cobbling together info from several articles, I figured out my basic flaws:
1. Missing a 'conky_main' function,
2. Missing a 'lua_draw_hook_post' to invoke it, and
3. Realizing that if I invoke conky from a terminal, print statements in lua would appear there.
So, for anyone who sees this question and has the same issues, here's the corrected code.
conky.config = {
    ...
    lua_load = '/home/conky-manager/MyConky/functions.lua',
    lua_draw_hook_post = "main",
};

conky.text = [[
...
${lua conky_test 'test'}
...
]]
and the proper basics in my functions.lua file
function conky_test(parm1)
    return 'result text'
end

function conky_main()
    if conky_window == nil then
        return
    end
end
A few notes:
I still haven't determined if using 'lua_draw_hook_pre' instead of 'lua_draw_hook_post' makes any difference, but it doesn't seem to in this example.
Also, some examples showed actually calling this 'test' function instead of writing a 'main', but the 'main' seemed to have value in checking to see if conky_window existed.
Some examples seemed to state that naming functions with the prefix 'conky_' was required, but then showed examples of calling those functions without the prefix, so I assume the prefix is inferred during the call.
A major note: you should run conky from the directory containing the Lua scripts.
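On the original question about debug output: if you start conky from a terminal, anything your Lua functions print() shows up in that terminal, which makes a crude but effective trace. A minimal sketch (the message text is made up):

function conky_test(parm1)
    -- appears in the terminal conky was started from, not in the panel
    print('conky_test called with: ' .. tostring(parm1))
    return 'result text'
end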

Conditional cases in cypress

I have a problem writing tests for conditional cases. One of the tests uses the API in 'before' (beforeAll) to create an object, and then, inside the test, the created object sometimes does not show up in the search results. I was using Puppeteer before, where I could let the page reload until the object showed up in the search results, but I have found no way to do the same thing here. I was thinking of using cy.get and then checking the result, for instance cy.get('sth').then(s1 => { /* do something like cy.reload() */ }).
Then I found out that s1 always stays the same after the reload, so I am stuck. I hope someone can give me a hand with this. If the description is not clear, please see my other post below. Thanks.
I'm sorry, but the problem is in your before function. You should start the test only once you are done preparing the environment it runs in.
const createPerson = (params, done) => {
    cy.request({ /* create people here */ }).then(({people}) => {
        done(undefined, {people});
    });
};

before((done) => {
    createPerson({}, (err, {people}) => {
        cy.visit('mywebsite');
        login({}, done); // this should be async as well
    });
});

it('search person I created by calling api', () => {
    // if your data is not cached, at this point the people will be populated on your screen
    cy.get('.search')
        .type("person's name{enter}");
    cy.get(':nth-child(1) > resulttable').click();
});
Based on what I'm gathering from your post, I think something like this may help:
cy.get('some.selector').then(elem => {
    // ...
});

cy.reload();

cy.get('some.selector').then(elem => {
    // run code on element after reloading...
});
If this does not answer your question, please consider making a minimal, complete and verifiable example.
My bad, I didn't explain my problem well. The problem is not in the before() function. In the before() function, I call a POST API to create persons.
Then I use those persons created in the before() function in each of my tests.
The code would look like:
before(() => { createPerson(); cy.visit('mywebsite'); login(); });

it('search person I created by calling api', () => {
    cy.get('.search')
        .type("person's name{enter}");
    cy.get(':nth-child(1) > resulttable').click();
});
Here is the problem: I can't find the person in the search results because the data takes time to come through, so the test fails.
I need to reload the page (the search results page) by calling
cy.reload();
However, I don't know how many reloads it takes before the person shows up in the search results.
My current workaround is cy.wait(30000), i.e. waiting 30 seconds.
So I am wondering what I should do instead.
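For what it's worth, one way to bound the reloads is a small recursive helper. This is a sketch only: the helper name is made up, the selectors are copied from the snippets above, and Cypress generally discourages this kind of conditional testing, so treat it as a workaround rather than a blessed pattern.

// hypothetical helper: search, then reload up to `attempts` times until a result row exists
function searchUntilFound(name, attempts) {
    cy.get('.search').type(name + '{enter}');
    cy.get('body').then(($body) => {
        if ($body.find(':nth-child(1) > resulttable').length > 0) {
            return; // the person finally showed up
        }
        if (attempts <= 1) {
            throw new Error('person never appeared in the search results');
        }
        cy.reload();
        searchUntilFound(name, attempts - 1);
    });
}

it('search person I created by calling api', () => {
    searchUntilFound("person's name", 5);
    cy.get(':nth-child(1) > resulttable').click();
});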

Service Worker slow response times

In the Windows and Android versions of the Google Chrome browser (I haven't tested others yet), the response time of a service worker increases linearly with the number of items stored in that specific cache storage when you use the Cache.match() function with the following option:
ignoreSearch = true
Dividing the items among multiple caches helps, but it is not always convenient to do so. Plus, even a small increase in the number of stored items makes a large difference in response times. According to my measurements, the response time roughly doubles for every tenfold increase in the number of items in the cache.
Official answer to my question in chromium issue tracker reveals that the problem is a known performance issue with Cache Storage implementation in Chrome which only happens when you use Cache.match() with ignoreSearch parameter set to true.
As you might know ignoreSearch is used to disregard query parameters in URL while matching the request against responses in cache. Quote from MDN:
...whether to ignore the query string in the url. For example, if set to
true the ?value=bar part of http://example.com/?value=bar would be ignored
when performing a match.
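To illustrate the semantics (a minimal sketch; the cache name and URL are made up):

// a cached response for '/app.js' matches a request for '/app.js?v=2'
// only because ignoreSearch strips the query string before comparing
caches.open('v1').then(function(cache) {
    return cache.match('/app.js?v=2', { ignoreSearch: true });
});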
Since it is not really convenient to stop using query parameter matching, I have come up with the following workaround, and I am posting it here in the hope that it will save someone some time:
// if the request has query parameters, `hasQuery` will be set to `true`
var hasQuery = event.request.url.indexOf('?') != -1;

event.respondWith(
    caches.match(event.request, {
        // ignore the query section of the URL based on our variable
        ignoreSearch: hasQuery,
    })
    .then(function(response) {
        // handle the response
    })
);
This works great because it handles every request with a query parameter correctly while handling others still at lightning speed. And you do not have to change anything else in your application.
According to the guy in that bug report, the issue was tied to the number of items in a cache. I made a solution and took it to the extreme, giving each resource its own cache:
var cachedUrls = [
    /* CACHE INJECT FROM GULP */
];

// update the cache
// don't worry StackOverflow, I call this only when the site tells the SW to update
function fetchCache() {
    return Promise.all(
        // for all urls
        cachedUrls.map(function(url) {
            // add a cache
            return caches.open('resource:' + url).then(function(cache) {
                // add the url
                return cache.add(url);
            });
        })
    );
}
In the project we have here, static resources are served with high cache expirations, and we use query parameters (repository revision numbers, injected into the html) only as a way to manage the [browser] cache.
It didn't really work to use your solution to selectively apply ignoreSearch, since we'd have to use it for all static resources anyway so that we could get cache hits!
However, not only did I dislike this hack, it still performed very slowly.
Okay, so, given that it was only a specific set of resources I needed ignoreSearch for, I decided to take a different route:
just remove the query parameters from the request URLs manually, instead of relying on ignoreSearch.
self.addEventListener('fetch', function(event) {
    // find urls that only have numbers as parameters
    // yours will obviously differ, my queries to ignore were just repo revisions
    var shaved = event.request.url.match(/^([^?]*)[?]\d+$/);
    // extract the url without the query
    shaved = shaved && shaved[1];
    event.respondWith(
        // try to get the url from the cache.
        // if this is a resource, use the shaved url,
        // otherwise use the original request
        // (I assume it [can] contain post-data and stuff)
        caches.match(shaved || event.request).then(function(response) {
            // respond
            return response || fetch(event.request);
        })
    );
});
I had the same issue, and the previous approaches caused some errors for requests that should have ignoreSearch: false. An easy approach that worked for me was to apply ignoreSearch: true only to certain requests, by matching on the URL with url.includes('A') && ... See the example below:
self.addEventListener("fetch", function(event) {
    var ignore;
    if (event.request.url.includes('A') && event.request.url.includes('B') && event.request.url.includes('C')) {
        ignore = true;
    } else {
        ignore = false;
    }
    event.respondWith(
        caches.match(event.request, {
            ignoreSearch: ignore,
        })
        .then(function(cached) {
            ...
        })
    );
});

What is the simple way to find the column name from Lineageid in SSIS

What is the simple way to find the column name from the LineageID in SSIS? Is there any system variable available?
I remember saying this can't be that hard; I can write some script in the error redirect to look up the column name from the input collection.
string badColumn = this.ComponentMetaData.InputCollection[Row.ErrorColumn].Name;
What I learned was that the failing column isn't in that collection. Well, it is, but the ErrorColumn reported is not quite what I needed. I couldn't find that package, but here's an example of why I couldn't get what I needed. Hopefully you will have better luck.
This is a simple data flow that will generate an error once it hits the derived column due to division by zero. The Derived column generates a new output column (LookAtMe) as the result of the division. The data viewer on the Error Output tells me the failing column is 73. Using the above script logic, if I attempted to access column 73 in the input collection, it's going to fail because that is not in the collection. LineageID 73 is LookAtMe and LookAtMe is not in my error branch, it's only in the non-error branch.
This is a copy of my XML, and you can see that, yes, the outputColumn with id 73 is LookAtMe.
<outputColumn id="73" name="LookAtMe" description="" lineageId="73" precision="0" scale="0" length="0" dataType="i4" codePage="0" sortKeyPosition="0" comparisonFlags="0" specialFlags="0" errorOrTruncationOperation="Computation" errorRowDisposition="RedirectRow" truncationRowDisposition="RedirectRow" externalMetadataColumnId="0" mappedColumnId="0"><properties>
I really wanted that data though, and I'm clever, so I thought I could union all my results back together and then conditionally split them back out to get it. The problem is, Union All is an asynchronous transformation. Async transformations result in the data being copied from one set of buffers to another, resulting in new lineage IDs being assigned. So even with a union all bringing the two streams back together, you wouldn't be able to walk up the data flow chain to find that original lineage ID, because it's in a different buffer.
Around this point, I conceded defeat and decided I could live without intelligent/helpful error reporting in my packages.
I know this is a long dead thread but I tripped across a manual solution to this problem and thought I would share for anyone who happens upon this same problem. Granted this doesn't provide a programmatic solution to the problem but for simple debugging it should do the trick. The solution uses a Derived Column as an example but this seems to work for any Data Flow component.
Answer provided by Todd McDermid and taken from AskSQLServerCentral:
"[...] Unfortunately, the lineage ID of your columns is pretty well hidden inside SSIS. It's the "key" that SSIS uses to identify columns. So, in order to figure out which column it was, you need to open the Advanced Editor of the Derived Column component or Data Conversion. Do that by right clicking and selecting "Advanced Editor". Go to the "Input and Output Properties" tab. Open the first node - "Derived Column Input" or "Data Conversion Input". Open the "Input Columns" tab. Click through the columns, noting the "LineageID" property of each. You may have to do the same with the "Derived Column Output" node, and "Output Columns" inside there. The column that matches your recorded lineage ID is the offending column."
For anyone using SQL Server versions before SS2016, here are a couple of reference links for a way to get the Column name:
http://www.andrewleesmith.co.uk/2017/02/24/finding-the-column-name-of-an-ssis-error-output-error-column-id/
which is based on:
http://toddmcdermid.blogspot.com/2016/04/finding-column-name-for-errorcolumn.html
I appreciate we aren't supposed to just post links, but this solution is quite convoluted, and I've tried to summarise by pulling info from both Todd and Andrew's blog posts and recreating them here. (thank you to both if you ever read this!)
From Todd's page:
1. Go to the "Inputs and Outputs" page, and select the "Output 0" node.
2. Change the "SynchronousInputID" property to "None". (This changes the script from synchronous to asynchronous.)
3. On the same page, open the "Output 0" node and select the "Output Columns" folder. Press the "Add Column" button. Change the "Name" property of this new column to "LineageID".
4. Press the "Add Column" button again, and change the "DataType" property to "Unicode string [DT_WSTR]", and change the "Name" property to "ColumnName".
5. Go to the "Script" page, and press the "Edit Script" button. Copy and paste this code into the ScriptMain class (you can delete all other method stubs):
public override void CreateNewOutputRows()
{
    IDTSInput100 input = this.ComponentMetaData.InputCollection[0];
    if (input != null)
    {
        IDTSVirtualInput100 vInput = input.GetVirtualInput();
        if (vInput != null)
        {
            foreach (IDTSVirtualInputColumn100 vInputColumn in vInput.VirtualInputColumnCollection)
            {
                Output0Buffer.AddRow();
                Output0Buffer.LineageID = vInputColumn.LineageID;
                Output0Buffer.ColumnName = vInputColumn.Name;
            }
        }
    }
}
Feel free to attach a dummy output to that script, with a data viewer, and see what you get. From here, it's "standard engineering" for you ETL gurus. Simply merge join the error output of the failing component with this metadata, and you'll be able to transform the ErrorColumn number into a meaningful column name.
But for those of you that do want to understand what the above script is doing:
It's getting the "first" (and only) input attached to the script component.
It's getting the virtual input related to the input. The "input" is what the script can actually "see" on the input, and since we didn't mark any columns as being "ReadOnly" or "ReadWrite"... that means the input has NO columns. However, the "virtual input" has the complete list of every column that exists, whether or not we've said we're "using" it.
We then loop over all of the "virtual columns" on this virtual input, and for each one...
Get the LineageID and column name, and push them out as a new row on our asynchronous script.
The image and text from Andrew's page help explain it in a bit more detail:
This map is then merge-joined with the ErrorColumn lineage ID(s) coming down the error path, so that the error information can be appended with the column name(s) from the map. I included a second script component that looks up the error description from the error code, so the error table rows that we see above contain both column names and error descriptions.
The remaining component that needs explaining is the conditional split – this exists just to provide metadata to the script component that creates the map. I created an expression (1 == 0) that always evaluates to false for the "No Rows – Metadata Only" path, so no rows ever travel down it.
Whilst this solution does require the insertion of some additional plumbing within the data flow, we get extremely valuable information logged when errors do occur. So especially when the data flow is running unattended in Production – when we don't have the tools & techniques available at design time to figure out what's going wrong – the logging that results gives us much more precise information about what went wrong and why, compared to simply giving us the failed data and leaving us to figure out why it was rejected.
There is no simple way to find the column name from a lineage ID.
If you want to find it using BIDS, you have to inspect every component inside the data flow using the Advanced Editor's Input and Output Properties tab, and check the LineageID of each column on each input and output path.
But you can:
inspect the XML - this is very difficult
write a .NET application and use FindColumnByLineageID
However, the second option involves a lot of coding and an understanding of the pipeline, because you have to programmatically open the package, iterate over the tasks, recurse into containers, and iterate over the transformations inside each data flow to find the particular component before you can use the proposed method.
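A minimal sketch of that second approach (assumptions: the ManagedDTS and DTSPipelineWrap assemblies are referenced, the package path and lineage ID are placeholders you supply, loop containers are not recursed into, and it iterates output columns directly instead of calling FindColumnByLineageID):

using System;
using Microsoft.SqlServer.Dts.Runtime;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;

class LineageLookup
{
    static void Main()
    {
        var app = new Microsoft.SqlServer.Dts.Runtime.Application();
        // hypothetical package path and lineage ID; substitute your own
        Package pkg = app.LoadPackage(@"C:\path\to\package.dtsx", null);
        int target = 73;

        foreach (Executable exec in pkg.Executables)
        {
            TaskHost host = exec as TaskHost;
            // only data flow tasks host a pipeline
            if (host == null || !host.CreationName.StartsWith("SSIS.Pipeline"))
                continue;

            MainPipe pipe = (MainPipe)host.InnerObject;
            foreach (IDTSComponentMetaData100 comp in pipe.ComponentMetaDataCollection)
                foreach (IDTSOutput100 output in comp.OutputCollection)
                    foreach (IDTSOutputColumn100 col in output.OutputColumnCollection)
                        if (col.LineageID == target)
                            Console.WriteLine("{0} -> {1}.{2}", target, comp.Name, col.Name);
        }
    }
}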
Here is a solution that:
Works at package runtime (not pre-populating)
Is automated through a Script Task and Component
Doesn't involve installing new assemblies or custom components
Is nicely BIML compatible
Check out the full solution here.
EDIT
Here is the short version.
Create 2 Object variables, execsObj and lineageIds
Create Script Task in Control flow, give it ReadWrite access to both variables
Add an assembly reference in your Script Task to Microsoft.SqlServer.DTSPipelineWrap.dll (this may require the SQL Server Client SDK to be installed; it is needed for the MainPipe object below)
Insert the following code into your Script Task
Dictionary<int, string> lineageIds = null;

public void Main()
{
    // Grab the executables so we have something to iterate over, and initialize our lineageIds dictionary
    // Why the executables? Well, SSIS won't let us store a reference to the Package itself...
    Dts.Variables["User::execsObj"].Value = ((Package)Dts.Variables["User::execsObj"].Parent).Executables;
    Dts.Variables["User::lineageIds"].Value = new Dictionary<int, string>();
    lineageIds = (Dictionary<int, string>)Dts.Variables["User::lineageIds"].Value;
    Executables execs = (Executables)Dts.Variables["User::execsObj"].Value;

    ReadExecutables(execs);

    Dts.TaskResult = (int)ScriptResults.Success;
}

private void ReadExecutables(Executables executables)
{
    foreach (Executable pkgExecutable in executables)
    {
        if (object.ReferenceEquals(pkgExecutable.GetType(), typeof(Microsoft.SqlServer.Dts.Runtime.TaskHost)))
        {
            TaskHost pkgExecTaskHost = (TaskHost)pkgExecutable;
            if (pkgExecTaskHost.CreationName.StartsWith("SSIS.Pipeline"))
            {
                ProcessDataFlowTask(pkgExecTaskHost);
            }
        }
        else if (object.ReferenceEquals(pkgExecutable.GetType(), typeof(Microsoft.SqlServer.Dts.Runtime.ForEachLoop)))
        {
            // Recurse into FELCs
            ReadExecutables(((ForEachLoop)pkgExecutable).Executables);
        }
    }
}

private void ProcessDataFlowTask(TaskHost currentDataFlowTask)
{
    MainPipe currentDataFlow = (MainPipe)currentDataFlowTask.InnerObject;
    foreach (IDTSComponentMetaData100 currentComponent in currentDataFlow.ComponentMetaDataCollection)
    {
        // Get the inputs in the component.
        foreach (IDTSInput100 currentInput in currentComponent.InputCollection)
            foreach (IDTSInputColumn100 currentInputColumn in currentInput.InputColumnCollection)
                lineageIds.Add(currentInputColumn.ID, currentInputColumn.Name);

        // Get the outputs in the component.
        foreach (IDTSOutput100 currentOutput in currentComponent.OutputCollection)
            foreach (IDTSOutputColumn100 currentoutputColumn in currentOutput.OutputColumnCollection)
                lineageIds.Add(currentoutputColumn.ID, currentoutputColumn.Name);
    }
}
Create a Script Component in the Data Flow with ReadOnly access to lineageIds, and add the following code.
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    Dictionary<int, string> lineageIds = (Dictionary<int, string>)Variables.lineageIds;

    int? colNum = Row.ErrorColumn;
    if (colNum.HasValue && (lineageIds != null))
    {
        if (lineageIds.ContainsKey(colNum.Value))
            Row.ErrorColumnName = lineageIds[colNum.Value];
        else
            Row.ErrorColumnName = "Row error";
    }
    Row.ErrorDescription = this.ComponentMetaData.GetErrorDescription(Row.ErrorCode);
}

Extending CodeIgniter Security.php to enable logging

TLDR; I want to enable database-logging of xss_clean() when replacing evil data.
I want to enable database logging of the xss_clean() function in Security.php; basically, I want to know whether the input I feed to xss_clean() was identified as containing malicious data that was filtered out.
So basically:
$str = '<script>alert();</script>';
$str = xss_clean($str);
What would happen ideally for me is:
Clean the string from XSS
Return the clean $str
Input information about the evil data (and eventually the logged in user) to the database
As far as I can see in the Security.php file, there is nothing that takes care of this for me, nor anything that could do so via hooks etc. I might be mistaken, of course.
Since Security.php does not log how many replacements were made, am I forced to extend it, copy-pasting the code of the original function and altering it to support this? Or is there a cleaner solution that is safer with respect to future updates of CodeIgniter (and especially of the files being tampered with/extended)?
You would need to extend the Security class, but there is absolutely no need to copy and paste any code if all you need is a log of the input/output. Something along the lines of the following would allow you to do so:
class My_Security extends CI_Security {

    public function xss_clean($str, $is_image = FALSE) {
        // Do whatever you need here with the input ... ($str, $is_image)

        $str = parent::xss_clean($str, $is_image);

        // Do whatever you need here with the output ... ($str)

        return $str;
    }
}
That way, you are just wrapping the existing method and intercepting its input and output. You could make it more forward compatible by using the PHP function func_get_args() to transparently pass the arguments along, if you are concerned about changes to the underlying method.
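A sketch of that forward-compatible variant, including the database logging the question asks for. Assumptions to note: the class prefix must match your subclass_prefix config (MY_ by default), the xss_log table is hypothetical, and loading the database from inside the Security class may happen too early in some CodeIgniter boot sequences, so treat this as a starting point:

class MY_Security extends CI_Security {

    public function xss_clean($str, $is_image = FALSE) {
        // forward all arguments untouched, so a future signature change still works (PHP 5.6+)
        $args    = func_get_args();
        $cleaned = parent::xss_clean(...$args);

        // log only when the filter actually changed something
        if (is_string($str) && $cleaned !== $str) {
            $CI =& get_instance();
            $CI->load->database();
            $CI->db->insert('xss_log', array(   // hypothetical table
                'original' => $str,
                'cleaned'  => $cleaned,
                'logged'   => date('Y-m-d H:i:s'),
            ));
        }

        return $cleaned;
    }
}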