I am trying to compute the DFT of a series of 16-bit input values using the Xilinx FFT v8.0 core on a Virtex-7, but I am having some trouble understanding the datasheet.
More specifically, I am using a standard auto-generated testbench (see below), but the output is always zero. Even after going through the datasheet and Jim Wu's FPGA Blog (http://myfpgablog.blogspot.de/2010/07/fft-results-from-matlab-fft-bit.html) many times, I still don't understand how to use the core. I think I am confused by its multiple input/output channels.
`timescale 1ns / 1ps
////////////////////////////////////////////////////////////////////////////////
// Company:
// Engineer:
//
// Create Date: 14:25:20 05/14/2015
// Design Name: fft_core
// Module Name: C:/Users/Alberto/Documents/MEGA/Master II/Master Thesis/test_fft/fft_tb.v
// Project Name: test_fft
// Target Device:
// Tool versions:
// Description:
//
// Verilog Test Fixture created by ISE for module: fft_core
//
// Dependencies:
//
// Revision:
// Revision 0.01 - File Created
// Additional Comments:
//
////////////////////////////////////////////////////////////////////////////////
module fft_tb;
// Inputs
reg aclk;
reg s_axis_config_tvalid;
reg s_axis_data_tvalid;
reg s_axis_data_tlast;
reg m_axis_data_tready;
reg [7:0] s_axis_config_tdata;
reg [31:0] s_axis_data_tdata;
// Outputs
wire s_axis_config_tready;
wire s_axis_data_tready;
wire m_axis_data_tvalid;
wire m_axis_data_tlast;
wire event_frame_started;
wire event_tlast_unexpected;
wire event_tlast_missing;
wire event_status_channel_halt;
wire event_data_in_channel_halt;
wire event_data_out_channel_halt;
wire [31:0] m_axis_data_tdata;
// Generate a 100 MHz clock (10 ns period)
always #5 aclk = ~aclk;
// Instantiate the Unit Under Test (UUT)
fft_core uut (
.aclk(aclk),
.s_axis_config_tvalid(s_axis_config_tvalid),
.s_axis_data_tvalid(s_axis_data_tvalid),
.s_axis_data_tlast(s_axis_data_tlast),
.m_axis_data_tready(m_axis_data_tready),
.s_axis_config_tready(s_axis_config_tready),
.s_axis_data_tready(s_axis_data_tready),
.m_axis_data_tvalid(m_axis_data_tvalid),
.m_axis_data_tlast(m_axis_data_tlast),
.event_frame_started(event_frame_started),
.event_tlast_unexpected(event_tlast_unexpected),
.event_tlast_missing(event_tlast_missing),
.event_status_channel_halt(event_status_channel_halt),
.event_data_in_channel_halt(event_data_in_channel_halt),
.event_data_out_channel_halt(event_data_out_channel_halt),
.s_axis_config_tdata(s_axis_config_tdata),
.s_axis_data_tdata(s_axis_data_tdata),
.m_axis_data_tdata(m_axis_data_tdata)
);
initial begin
// Initialize Inputs
aclk = 0;
s_axis_config_tvalid = 0;
s_axis_data_tvalid = 0;
s_axis_data_tlast = 0;
m_axis_data_tready = 0;
s_axis_config_tdata = 0;
s_axis_data_tdata = 0;
// Wait 150 ns for the global reset to finish
#150;
// Configure the core: bit 0 = 1 requests a forward FFT (not an IFFT).
// For simplicity this ignores the s_axis_config_tready handshake.
s_axis_config_tvalid = 1;
s_axis_config_tdata = 8'b00000001;
m_axis_data_tready = 1;
#10;
s_axis_config_tvalid = 0;
// Drive one frame of 16 samples (assuming the core is configured for a
// 16-point transform). The input signal is real, so the upper 16 bits of
// s_axis_data_tdata (the imaginary part) stay zero.
s_axis_data_tvalid = 1;
s_axis_data_tdata = 32'h00005678; #10;
s_axis_data_tdata = 32'h00001121; #10;
s_axis_data_tdata = 32'h00001516; #10;
s_axis_data_tdata = 32'h00001920; #10;
s_axis_data_tdata = 32'h00001121; #10;
s_axis_data_tdata = 32'h00001516; #10;
s_axis_data_tdata = 32'h00001920; #10;
s_axis_data_tdata = 32'h00001121; #10;
s_axis_data_tdata = 32'h00001516; #10;
s_axis_data_tdata = 32'h00001920; #10;
s_axis_data_tdata = 32'h00001121; #10;
s_axis_data_tdata = 32'h00001516; #10;
s_axis_data_tdata = 32'h00001920; #10;
s_axis_data_tdata = 32'h00001121; #10;
s_axis_data_tdata = 32'h00001516; #10;
// Assert tlast on the final sample of the frame; leaving it low makes the
// core flag event_tlast_missing
s_axis_data_tlast = 1;
s_axis_data_tdata = 32'h00001920; #10;
s_axis_data_tlast = 0;
s_axis_data_tvalid = 0;
// The core needs a long time (several microseconds) before
// m_axis_data_tvalid rises, so let the simulation run well past the input
#10000;
$finish;
end
endmodule
Here are some screenshots of the waveform and of the core configuration I used (I don't yet have the reputation to post them directly):
https://www.dropbox.com/s/0ejccc4dm6zdw7h/FFT.zip?dl=0
Does anybody have an explanation, or a working testbench (ideally in Verilog) that processes data with this IP core?
Thanks in advance.
Edit:
For posterity, the full code is available here; details and explanations can be found in the paper.
In the end I more or less solved my problem: the core has a huge latency (several µs) before delivering data.
So if someone else has the same problem, don't hesitate to increase the simulation time dramatically; it may solve your problem.
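For reference, here is a minimal output monitor to paste into the testbench module above (a sketch only: it assumes the port names used there and the packing of m_axis_data_tdata with the real part in the lower 16 bits, mirroring the input):
integer k = 0;
always @(posedge aclk) begin
    // Output samples only appear once the core's processing latency
    // (several microseconds) has elapsed
    if (m_axis_data_tvalid && m_axis_data_tready) begin
        $display("X[%0d]: re = %h, im = %h",
                 k, m_axis_data_tdata[15:0], m_axis_data_tdata[31:16]);
        k = k + 1;
        if (m_axis_data_tlast)
            $display("output frame complete at t = %0t", $time);
    end
end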
Related
I have a problem with my Dynare/Octave setup. When I run the code our professor uploaded (which worked for him and my classmates), I get an error message. I found out that it comes from the function stoch_simul:
This is the error message.
DGGES : parameter number 19 is invalid
error: Fortran procedure terminated by call to XERBLA
Since the code should be fine, I thought it might be caused by my Dynare or Octave installation. I reinstalled both and tried Dynare 4.6.2 and 4.6.3, but I still get the same result. I am on GNU Octave version 5.2.0.
I would appreciate any help!
This is my code:
var n y w i a nu r pi rast;
varexo eps_a eps_i;
parameters alpha, rho, theta, varphi, eta_a, phi_pi, phi_y, eta_i, epsilon, mu, piast, beta, kappa, psi;
alpha = 0.3; // production function
rho = 0.01; // time preference rate
varphi = 1; // inverse Frisch elasticity
theta = 1; // inverse elasticity of intertemporal substitution
eta_a = 0; // productivity shock persistence
phi_pi = 1.5; // interest rate rule parameter
phi_y = 0.125; // interest rate rule parameter
gamma = 0.667; // Calvo parameter
eta_i = 0.5; // monetary policy shock persistence
epsilon = 6; // elasticity of substitution
mu = epsilon/(epsilon-1); // mark-up
piast = 0.02; // inflation target
beta = 1/(1+rho+piast);
kappa = (1-gamma)*(1-gamma*beta)/gamma*(theta*(1-alpha)+varphi+alpha)/(1-alpha+epsilon);
psi = 1+(1-alpha)*(1-theta)/(theta*(1-alpha)+varphi+alpha);
model;
a = eta_a*a(-1) + eps_a;
nu = eta_i*nu(-1) + eps_i;
y = y(+1)-(1/theta)*(i-piast-pi(+1)-rast);
pi = beta*pi(+1) + kappa*y;
r = i - piast - pi(+1);
i = rho + piast + phi_pi*pi + phi_y*y + nu;
rast = rho + theta*psi*(a(+1)-a);
y = a + (1-alpha)*n;
w = theta*y + varphi*n;
end;
initval;
a = 0;
nu = 0;
pi = 0.02;
n = 0.3;
y = 0.6;
w = 1.5;
r = 0.05;
i = 0.03;
rast = 0.03;
end;
steady;
// check;
// Specify temporary shock
shocks;
var eps_a; stderr 0.0075;
var eps_i; stderr 0.003;
end;
stoch_simul(periods=200, drop=100, order=1, irf=12);
I'm trying to perform Excel-like auto-calculations whereby each row of a specific column gets a different formula.
Below is my attempt. My code may not look professional because I'm still learning Java.
I'd appreciate any help with this.
int row = 5;
for (int a = 0; a < row; a++) {
    pst = con.prepareStatement("select " + category + "_medicationValue," + category
            + "_feedingValue," + category + "_rawmaterialValue from " + db
            + ".forecast where Cycle=" + a);
    rs = pst.executeQuery();
    while (rs.next()) {
        double medication = rs.getDouble(category + "_medicationValue");
        double feeding = rs.getDouble(category + "_feedingValue");
        double rawMaterial = rs.getDouble(category + "_rawmaterialValue");
        double per90M = 0.0;
        double per90F = 0.0;
        double per90R = 0.0;
        for (int i = 0; i < row; ++i) {
            // each pass adds the base value plus 90% of it, so the totals
            // written to row i keep growing across the inner loop
            per90M = per90M + ((medication * 0.9) + medication);
            per90F = per90F + ((feeding * 0.9) + feeding);
            per90R = per90R + ((rawMaterial * 0.9) + rawMaterial);
            Alert alert2 = new Alert(AlertType.INFORMATION);
            alert2.setTitle("Success");
            alert2.setHeaderText("Forecast");
            alert2.setContentText("med: " + per90M + " Feed: " + per90F + " raw M: " + per90R);
            alert2.showAndWait();
            pst = con.prepareStatement("update " + db + ".forecast set " + category
                    + "_medicationValue=?," + category + "_feedingValue=?," + category
                    + "_rawmaterialValue=? where Cycle=" + i);
            pst.setDouble(1, per90M);
            pst.setDouble(2, per90F);
            pst.setDouble(3, per90R);
            pst.executeUpdate();
        }
    }
}
Alert alert1 = new Alert(AlertType.INFORMATION);
alert1.setTitle("Success");
alert1.setHeaderText("Forecast");
alert1.setContentText(category + " details have been linearly forecasted");
alert1.showAndWait();
pst.close();
rs.close();
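For comparison, here is a minimal sketch of how the same per-cycle update could look with bound parameters and batching. It is shown for the medication column only (the other two follow the same pattern); the table and column names just follow the code above, and because identifiers cannot be bound as ? placeholders, category and db must be validated against a fixed whitelist before being spliced into the SQL:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ForecastUpdater {
    // Applies the 90%-growth formula to every cycle row of one column.
    static void forecast(Connection con, String db, String category, int rows) throws SQLException {
        String select = "select " + category + "_medicationValue from " + db
                + ".forecast where Cycle=?";
        String update = "update " + db + ".forecast set " + category
                + "_medicationValue=? where Cycle=?";
        try (PreparedStatement sel = con.prepareStatement(select);
             PreparedStatement upd = con.prepareStatement(update)) {
            for (int cycle = 0; cycle < rows; cycle++) {
                sel.setInt(1, cycle);
                try (ResultSet rs = sel.executeQuery()) {
                    if (rs.next()) {
                        double medication = rs.getDouble(1);
                        // one application of the formula: value + 90% of value
                        double forecasted = medication + medication * 0.9;
                        upd.setDouble(1, forecasted);
                        upd.setInt(2, cycle);
                        upd.addBatch();   // defer the write
                    }
                }
            }
            upd.executeBatch();           // run all updates in one round trip
        }
    }
}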
I'm implementing an autoencoder for anomaly detection on IoT sensor data. My data set comes from a simulation, but basically it is accelerometer data - three dimensions, one per axis.
I'm reading it from a CSV file; columns 2-4 contain the data. Sorry for the code quality, it is quick and dirty:
private static DataSetIterator getTrainingData(int batchSize, Random rand) {
double[] ix = new double[nSamples];
double[] iy = new double[nSamples];
double[] iz = new double[nSamples];
double[] ox = new double[nSamples];
double[] oy = new double[nSamples];
double[] oz = new double[nSamples];
Reader in;
try {
in = new FileReader("/Users/romeokienzler/Downloads/lorenz_healthy.csv");
Iterable<CSVRecord> records;
records = CSVFormat.DEFAULT.parse(in);
int index = 0;
for (CSVRecord record : records) {
String[] recordArray = record.get(0).split(";");
ix[index] = Double.parseDouble(recordArray[1]);
iy[index] = Double.parseDouble(recordArray[2]);
iz[index] = Double.parseDouble(recordArray[3]);
ox[index] = Double.parseDouble(recordArray[1]);
oy[index] = Double.parseDouble(recordArray[2]);
oz[index] = Double.parseDouble(recordArray[3]);
index++;
}
INDArray ixNd = Nd4j.create(ix);
INDArray iyNd = Nd4j.create(iy);
INDArray izNd = Nd4j.create(iz);
INDArray oxNd = Nd4j.create(ox);
INDArray oyNd = Nd4j.create(oy);
INDArray ozNd = Nd4j.create(oz);
INDArray iNd = Nd4j.hstack(ixNd, iyNd, izNd);
INDArray oNd = Nd4j.hstack(oxNd, oyNd, ozNd);
DataSet dataSet = new DataSet(iNd, oNd);
List<DataSet> listDs = dataSet.asList();
Collections.shuffle(listDs, rng);
return new ListDataSetIterator(listDs, batchSize);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
System.exit(-1);
return null;
}
}
This is the net:
public static void main(String[] args) {
// Generate the training data
DataSetIterator iterator = getTrainingData(batchSize, rng);
// Create the network
int numInput = 3;
int numOutputs = 3;
int nHidden = 1;
int listenerFreq = batchSize / 5;
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder().seed(seed)
.gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue)
.gradientNormalizationThreshold(1.0).iterations(iterations).momentum(0.5)
.momentumAfter(Collections.singletonMap(3, 0.9))
.optimizationAlgo(OptimizationAlgorithm.CONJUGATE_GRADIENT).list(2)
.layer(0,
new AutoEncoder.Builder().nIn(numInput).nOut(nHidden).weightInit(WeightInit.XAVIER)
.lossFunction(LossFunction.RMSE_XENT).corruptionLevel(0.3).build())
.layer(1, new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD).activation("softmax").nIn(nHidden)
.nOut(numOutputs).build())
.pretrain(true).backprop(false).build();
MultiLayerNetwork model = new MultiLayerNetwork(conf);
model.init();
model.setListeners(Collections.singletonList((IterationListener) new ScoreIterationListener(listenerFreq)));
for (int i = 0; i < nEpochs; i++) {
iterator.reset();
model.fit(iterator);
}
}
I'm getting the following error:
Shapes do not match: x.shape=[1, 9000], y.shape=[1, 3]
Exception in thread "main" java.lang.IllegalArgumentException: Shapes do not match: x.shape=[1, 9000], y.shape=[1, 3]
at org.nd4j.linalg.api.parallel.tasks.cpu.CPUTaskFactory.getTransformAction(CPUTaskFactory.java:92)
at org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner.doTransformOp(DefaultOpExecutioner.java:409)
at org.nd4j.linalg.api.ops.executioner.DefaultOpExecutioner.exec(DefaultOpExecutioner.java:62)
at org.nd4j.linalg.api.ndarray.BaseNDArray.subi(BaseNDArray.java:2660)
at org.nd4j.linalg.api.ndarray.BaseNDArray.subi(BaseNDArray.java:2641)
at org.nd4j.linalg.api.ndarray.BaseNDArray.sub(BaseNDArray.java:2419)
at org.deeplearning4j.nn.layers.feedforward.autoencoder.AutoEncoder.computeGradientAndScore(AutoEncoder.java:123)
at org.deeplearning4j.optimize.solvers.BaseOptimizer.gradientAndScore(BaseOptimizer.java:132)
at org.deeplearning4j.optimize.solvers.BaseOptimizer.optimize(BaseOptimizer.java:151)
at org.deeplearning4j.optimize.Solver.optimize(Solver.java:52)
at org.deeplearning4j.nn.layers.BaseLayer.fit(BaseLayer.java:486)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.pretrain(MultiLayerNetwork.java:170)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.fit(MultiLayerNetwork.java:1134)
at org.deeplearning4j.examples.feedforward.autoencoder.AnomalyDetector.main(AnomalyDetector.java:136)
But I'm not defining the dimensions anywhere, and IMHO the dimensions of input and output should both be (3, 3000). Where is my mistake?
Thanks a lot in advance...
EDIT: updated to the latest release (13.9.16).
I'm getting the same error (semantically); here is what I'm doing now:
private static DataSetIterator getTrainingData(int batchSize, Random rand) {
double[] ix = new double[nSamples];
double[] iy = new double[nSamples];
double[] iz = new double[nSamples];
double[] ox = new double[nSamples];
double[] oy = new double[nSamples];
double[] oz = new double[nSamples];
try {
RandomAccessFile in = new RandomAccessFile(new File("/Users/romeokienzler/Downloads/lorenz_healthy.csv"),
"r");
int index = 0;
String record;
while ((record = in.readLine()) != null) {
String[] recordArray = record.split(";");
ix[index] = Double.parseDouble(recordArray[1]);
iy[index] = Double.parseDouble(recordArray[2]);
iz[index] = Double.parseDouble(recordArray[3]);
ox[index] = Double.parseDouble(recordArray[1]);
oy[index] = Double.parseDouble(recordArray[2]);
oz[index] = Double.parseDouble(recordArray[3]);
index++;
}
INDArray ixNd = Nd4j.create(ix);
INDArray iyNd = Nd4j.create(iy);
INDArray izNd = Nd4j.create(iz);
INDArray oxNd = Nd4j.create(ox);
INDArray oyNd = Nd4j.create(oy);
INDArray ozNd = Nd4j.create(oz);
INDArray iNd = Nd4j.hstack(ixNd, iyNd, izNd);
INDArray oNd = Nd4j.hstack(oxNd, oyNd, ozNd);
DataSet dataSet = new DataSet(iNd, oNd);
List<DataSet> listDs = dataSet.asList();
Collections.shuffle(listDs, rng);
return new ListDataSetIterator(listDs, batchSize);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
System.exit(-1);
return null;
}
}
And here the net:
// Set up network: a 3 -> 1 -> 3 autoencoder
// (three inputs/outputs, one per accelerometer axis, one hidden unit)
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder().seed(12345).iterations(1)
.weightInit(WeightInit.XAVIER).updater(Updater.ADAGRAD).activation("relu")
.optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT).learningRate(learningRate)
.regularization(true).l2(0.0001).list().layer(0, new DenseLayer.Builder().nIn(3).nOut(1).build())
.layer(1, new OutputLayer.Builder().nIn(1).nOut(3).lossFunction(LossFunctions.LossFunction.MSE).build())
.pretrain(false).backprop(true).build();
MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.setListeners(Collections.singletonList((IterationListener) new ScoreIterationListener(1)));
// Load the accelerometer training data
DataSetIterator iter = getTrainingData(batchSize, rng);
// Train model:
int nEpochs = 30;
while (iter.hasNext()) {
DataSet ds = iter.next();
for (int epoch = 0; epoch < nEpochs; epoch++) {
net.fit(ds.getFeatures(), ds.getLabels());
System.out.println("Epoch " + epoch + " complete");
}
}
My error is:
Exception in thread "main" java.lang.IllegalStateException: Mis matched lengths: [9000] != [3]
at org.nd4j.linalg.util.LinAlgExceptions.assertSameLength(LinAlgExceptions.java:39)
at org.nd4j.linalg.api.ndarray.BaseNDArray.subi(BaseNDArray.java:2786)
at org.nd4j.linalg.api.ndarray.BaseNDArray.subi(BaseNDArray.java:2767)
at org.nd4j.linalg.api.ndarray.BaseNDArray.sub(BaseNDArray.java:2547)
at org.deeplearning4j.nn.layers.BaseOutputLayer.getGradientsAndDelta(BaseOutputLayer.java:182)
at org.deeplearning4j.nn.layers.BaseOutputLayer.backpropGradient(BaseOutputLayer.java:161)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.calcBackpropGradients(MultiLayerNetwork.java:1125)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.backprop(MultiLayerNetwork.java:1077)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.computeGradientAndScore(MultiLayerNetwork.java:1817)
at org.deeplearning4j.optimize.solvers.BaseOptimizer.gradientAndScore(BaseOptimizer.java:152)
at org.deeplearning4j.optimize.solvers.StochasticGradientDescent.optimize(StochasticGradientDescent.java:54)
at org.deeplearning4j.optimize.Solver.optimize(Solver.java:51)
at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.fit(MultiLayerNetwork.java:1445)
at org.deeplearning4j.examples.feedforward.anomalydetection.IoTAnomalyExample.main(IoTAnomalyExample.java:110)
I'm pretty sure I'm messing up the training data: its shape is 3000 rows by 3 columns, and the target is the same (the very same data, because I want to build an autoencoder). The test data can be found here:
https://pmqsimulator-romeokienzler-2310.mybluemix.net/data
Any ideas?
Thanks to Alex Black of Skymind, this is the solution (I got the shape wrong):
INDArray ixNd = Nd4j.create(ix, new int[]{3000,1});
INDArray iyNd = Nd4j.create(iy, new int[]{3000,1});
INDArray izNd = Nd4j.create(iz, new int[]{3000,1});
INDArray oxNd = Nd4j.create(ox, new int[]{3000,1});
INDArray oyNd = Nd4j.create(oy, new int[]{3000,1});
INDArray ozNd = Nd4j.create(oz, new int[]{3000,1});
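To see why this fixes it, here is a tiny standalone check (a sketch, with nSamples = 3000 as in the question): Nd4j.create(double[]) yields a 1 x n row vector, so hstack of three of them is [1, 9000], whereas explicit [3000, 1] column vectors make hstack produce the [3000, 3] feature matrix that the net's nIn(3) expects:
import java.util.Arrays;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class ShapeCheck {
    public static void main(String[] args) {
        double[] axis = new double[3000];                     // one accelerometer axis
        INDArray row = Nd4j.create(axis);                     // shape [1, 3000]
        INDArray col = Nd4j.create(axis, new int[]{3000, 1}); // shape [3000, 1]
        System.out.println(Arrays.toString(row.shape()));     // [1, 3000]
        // hstack of row vectors concatenates along columns: [1, 9000]
        System.out.println(Arrays.toString(Nd4j.hstack(row, row, row).shape()));
        // hstack of column vectors gives one example per row: [3000, 3]
        System.out.println(Arrays.toString(Nd4j.hstack(col, col, col).shape()));
    }
}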
What gives the best performance? (edited to include another option)
var myVec:Vector.<Number> = new Vector.<Number>();
for (...)
// Do stuff - example: myVec.push(Math.random());
// Need to empty and repopulate vector
myVec.splice(0, myVec.length);
// OR
myVec = new Vector.<Number>();
// OR
myVec.length = 0;
I have heard that:
myVec.length = 0;
is the fastest way... but I have never verified this.
Using the code below I've found myVec = new Vector.<T>() is the fastest way.
Output (average times in ms to empty a Vector.<Number> with 100000 random entries):
0.679 // using splice()
0.024 // using new Vector.<T>()
0.115 // using .length = 0;
Code:
var myVec:Vector.<Number> = new Vector.<Number>();
var emptyTime:int;
var startTime:int;
var totalEmptyTime:Number = 0;
// Need to empty and repopulate vector
const NUM_TRIALS:int = 1000;
var j:int;
for (j = 0; j < NUM_TRIALS; j++)
{
    fillVector();
    startTime = getTimer();
    myVec.splice(0, myVec.length);
    emptyTime = getTimer() - startTime;
    totalEmptyTime += emptyTime;
}
trace(totalEmptyTime / NUM_TRIALS);
totalEmptyTime = 0;
// OR
for (j = 0; j < NUM_TRIALS; j++)
{
    fillVector();
    startTime = getTimer();
    myVec = new Vector.<Number>();
    emptyTime = getTimer() - startTime;
    totalEmptyTime += emptyTime;
}
trace(totalEmptyTime / NUM_TRIALS);
totalEmptyTime = 0;
// OR
for (j = 0; j < NUM_TRIALS; j++)
{
    fillVector();
    startTime = getTimer();
    myVec.length = 0;
    emptyTime = getTimer() - startTime;
    totalEmptyTime += emptyTime;
}
trace(totalEmptyTime / NUM_TRIALS);

function fillVector():void
{
    for (var i:int = 0; i < 100000; i++)
        myVec.push(Math.random());
}
I'm trying to create a simple text field with a drop shadow in ActionScript 3.0; for example:
_tf = new TextField();
_tf.autoSize = TextFieldAutoSize.CENTER;
_tf.selectable = false;
var format:TextFormat = new TextFormat();
format.font = "Arial";
format.bold = true;
format.color = 0xffffff;
format.size = 12;
// set the default format before assigning the text,
// otherwise the format is not applied to it
_tf.defaultTextFormat = format;
_tf.text = "Drop shadow";
addChild(_tf);
How can I give this text a drop shadow?
_tf.filters = [new DropShadowFilter()];
or even better:
// note: quality is an int (1 = low, 2 = medium, 3 = high)
_tf.filters = [filter(4, 153, 0xffffff, 0.7, 4, 4, 0.7, 1, false, false, false)];
function filter(distance, angle, color, alpha, blurX, blurY, strength, quality, inner, knockout, hideObject) {
    return new DropShadowFilter(distance, angle, color, alpha, blurX, blurY, strength, quality, inner, knockout, hideObject);
}
Another way:
_tf.filters = [new DropShadowFilter(4.0,45,0x000000,1.0,4.0,4.0,1.0,1,false,true,false)];
/*
DropShadowFilter(distance:Number = 4.0, angle:Number = 45, color:uint = 0,
                 alpha:Number = 1.0, blurX:Number = 4.0, blurY:Number = 4.0,
                 strength:Number = 1.0, quality:int = 1, inner:Boolean = false,
                 knockout:Boolean = false, hideObject:Boolean = false)
*/
documentation here: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/filters/DropShadowFilter.html
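Putting it all together, here is a minimal self-contained version (a sketch only: it assumes compilation as the document class, and the class name ShadowTextExample is made up):
package {
    import flash.display.Sprite;
    import flash.filters.DropShadowFilter;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;
    import flash.text.TextFormat;

    public class ShadowTextExample extends Sprite {
        public function ShadowTextExample() {
            var tf:TextField = new TextField();
            tf.autoSize = TextFieldAutoSize.CENTER;
            tf.selectable = false;
            // TextFormat(font, size, color, bold)
            var format:TextFormat = new TextFormat("Arial", 12, 0xffffff, true);
            tf.defaultTextFormat = format;          // before assigning the text
            tf.text = "Drop shadow";
            tf.filters = [new DropShadowFilter()];  // default: black, 45 degrees
            addChild(tf);
        }
    }
}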