Develop a Python method change(amount) that for any integer amount in the range from 24 to 1000 returns a list consisting of numbers 5 and 7 only, such that their sum is equal to amount. For example, change(28) may return [7, 7, 7, 7], while change(49) may return [7, 7, 7, 7, 7, 7, 7] or [5, 5, 5, 5, 5, 5, 5, 7, 7] or [7, 5, 5, 5, 5, 5, 5, 5, 7].
To solve this quiz, implement the method change(amount) on your machine, test it on several inputs, and then paste your code in the field below and press the submit quiz button. Your submission should contain the change method only (in particular, make sure to remove all print statements).
Just started programming, quite proud of this. Here you go:
To use: print(change(amount))
def change(amount):
    # Only amounts in the supported range 24..1000
    if amount < 24 or amount > 1000:
        return 'error'
    array = []
    while True:
        # Once the remaining amount is divisible by 5, fill it with 5s and stop
        if (amount / 5).is_integer():
            for i in range(int(amount / 5)):
                array.append(5)
            return array
        # Otherwise take a 7 and keep going
        array.append(7)
        amount -= 7
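For comparison, here is a shorter arithmetic sketch of the same idea (my own variant, not part of the original submission): since 7 ≡ 2 (mod 5), at most four 7s are ever needed before the remainder becomes divisible by 5, so the number of 7s can be found directly instead of subtracting in a loop.

def change(amount):
    # Same supported range as the quiz statement
    if amount < 24 or amount > 1000:
        return 'error'
    # Try 0..4 sevens; the first count that leaves a multiple of 5 works
    for sevens in range(5):
        rest = amount - 7 * sevens
        if rest >= 0 and rest % 5 == 0:
            return [7] * sevens + [5] * (rest // 5)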
I am trying to understand a conceptual approach to integrating data that does not have the same dimensionality as the frames into a stack of observation frames.
Example Frame: [1, 2, 3]
Example extra data: [a, b]
Currently, I am approaching this as follows, with the example of 3 frames (rows) representing temporal observation data over 3 time periods, and a 4th frame (row) representing non-temporal data for which only the most recent observed values are needed.
Example:
[
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
[a, b, NaN]
]
The a and b are the added data and the NaN is just a value added to match the dimensions of the existing data. Would there be differences (all inputs welcomed) in using NaN vs an outlier value like -1 that would never be observed by other measures?
One possible alternative would be to structure the observation data as such:
[
[1, 2, 3, a, b],
[4, 5, 6, a-1, b-1],
[7, 8, 9, a-2, b-2],
]
This seems like a noticeable increase in resources, and in my context the measures a and b can be universally understood as "bigger is better" or "smaller is better" without context from the other data values.
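To make the two layouts concrete, here is a small numpy sketch of what I mean (the array names and values are only illustrative):

import numpy as np

# Three temporal frames, as in the example above
frames = np.array([[1., 2., 3.],
                   [4., 5., 6.],
                   [7., 8., 9.]])
extra = np.array([10., 20.])   # stand-ins for a and b

# Option 1: pad the extra data with NaN to frame width and stack it as a 4th row
padded = np.full(frames.shape[1], np.nan)
padded[:extra.size] = extra
stacked = np.vstack([frames, padded])        # shape (4, 3)

# Option 2: append the extra values as new columns of every frame
# (repeated here for simplicity; in the question they are per-step values)
widened = np.hstack([frames, np.tile(extra, (frames.shape[0], 1))])   # shape (3, 5)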
I have some code that I have been porting from 2.7 to 3.6/3.7. Most of the unit tests, which have a pretty good coverage, already execute successfully under 3.x. But I have yet to fully commit to switching over to 3.x for development.
I recently noticed, when running black (the code formatter), that it chokes if my code would not compile under 3.x, with a message about Python 3.6 AST-based parsing failing.
Is black a reliable indicator of 3.x-readiness, at least at the syntax level? I know that 2to3 is the tool to use, and I know that black would not catch differences in the standard library (basestring disappearing, StringIO.StringIO becoming io.StringIO, etc.), but it seems nice that a code formatter could incidentally help out as well.
A very basic sample, with syntax that is invalid for 3.x:
print "a", 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21
gives:
error: cannot format test_black.py:
cannot use --safe with this file; failed to parse source file with
Python 3.6's builtin AST.
Re-run with --fast or stop using deprecated Python 2 syntax.
AST error message: Missing parentheses in call to 'print'.
Did you mean print("a", 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21)? (<unknown>, line 1)
All done! 💥 💔 💥
1 file failed to reformat.
Fix the syntax for 3.x and it works. If I do the right thing and add parentheses, print("a", 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21), then all's well:
reformatted test_black.py
All done! ✨ 🍰 ✨
1 file reformatted.
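For what it's worth, the same syntax-level check can be done without a formatter: compiling each file with Python 3's built-in compile() raises the same kind of SyntaxError that black is reporting. A minimal sketch (the directory walk and output format are my own choices):

import pathlib

# Compile every .py file under the current directory with the running
# (Python 3) interpreter and report the ones that fail to parse.
for path in pathlib.Path('.').rglob('*.py'):
    source = path.read_text(encoding='utf-8')
    try:
        compile(source, str(path), 'exec')
    except SyntaxError as exc:
        print(f'{path}: {exc}')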
I'm building a function that replaces a limited number of old values with new values in a list (xs) and returns the result as a new list (new_xs). I am failing most test cases. I have provided examples of the expected output and two examples of failing test cases.
Example:
Limit=None (replace all old values): xs=[1,2,1,3,1,4,1,5,1], old=1, new=2 --> new_xs=[2,2,2,3,2,4,2,5,2]
Limit=0 or negative: do not alter anything in the list.
Limit=2: only replace two old values and leave the rest untouched.
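To make the intended behaviour concrete, here are a few illustrative calls (my own example values, matching the rules above):

replace([1, 2, 1, 3, 1], 1, 9)             # -> [9, 2, 9, 3, 9]   limit=None: replace all
replace([1, 2, 1, 3, 1], 1, 9, limit=2)    # -> [9, 2, 9, 3, 1]   only the first two
replace([1, 2, 1, 3, 1], 1, 9, limit=0)    # -> [1, 2, 1, 3, 1]   nothing altered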
Here is my non-working code:
def replace(xs, old, new, limit=None):
    new_xs = []
    replacements = 0
    for num in xs:
        # Replace while still under the limit (limit=None means no limit)
        if num == old and (limit is None or replacements < limit):
            new_xs.append(new)
            replacements += 1
        else:
            new_xs.append(num)
    return new_xs
Still fails 6 tests:
AssertionError: [] != None
AssertionError: Lists differ: [9, 2, 9, 3, 9, 4, 9, 5, 9] != [-10, 2, -10, 3, -10, 4, -10, 5, -10]
I have some data files with content
a1 b1 c1 d1
a1 b2 c2 d2
...
[blank line]
a2 b1 c1 d1
a2 b2 c2 d2
...
I plot this with gnuplot using
splot 'file' u 1:2:3:4 w pm3d
Now, I want to use a binary file. I created the file with Fortran using unformatted stream-access (direct or sequential access did not work directly). By using gnuplot with
splot 'file' binary format='%float%float%float%float' u 1:2:3
I get a normal 3D-plot. However, the pm3d-command does not work as I don't have the blank lines in the binary file. I get the error message:
>splot 'file' binary format='%float%float%float%float' u 1:2:3:4 w pm3d
Warning: Single isoline (scan) is not enough for a pm3d plot.
Hint: Missing blank lines in the data file? See 'help pm3d' and FAQ.
According to the demo script at http://gnuplot.sourceforge.net/demo/image2.html, I have to specify the record length (which I still don't fully understand). However, using the script from the demo page with pm3d added to the command produces the same error message:
splot 'scatter2.bin' binary record=30:30:29:26 u 1:2:3 w pm3d
So how is it possible to plot this four-dimensional data from a binary file correctly?
Edit: Thanks, mgilson. Now it works fine. Just for the record, my Fortran code snippet:
open(unit=83, file=fname, action='write', status='replace', access='stream', form='unformatted')
a = 0.d0
b = 0.d0
do i = 1, 200
   do j = 1, 100
      write(83) real(a), real(b), c(i,j), d(i,j)
      b = b + db
   end do
   a = a + da
   b = 0.d0
end do
close(83)
The gnuplot commands:
set pm3d map
set contour
set cntrparam levels 20
set cntrparam bspline
unset clabel
splot 'fname' binary record=(100,-1) format='%float' u 1:2:3:4 t 'd as pm3d-projection, c as contour'
Great question, and thanks for posting it. This is a corner of gnuplot I hadn't spent much time with before. First, I need to generate a little test data -- I used python, but you could use fortran just as easily:
Note that my input array (b) is just a 10x10 array. The first two "columns" in the datafile are just the index (i,j), but you could use anything.
>>> import struct
>>> import numpy as np
>>> a = np.arange(10)
>>> b = a[None,:]+a[:,None]
>>> b
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
[ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
[ 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
[ 4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
[ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
[ 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
[ 7, 8, 9, 10, 11, 12, 13, 14, 15, 16],
[ 8, 9, 10, 11, 12, 13, 14, 15, 16, 17],
[ 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]])
>>> with open('foo.dat','wb') as foo:
... for (i,j),dat in np.ndenumerate(b):
... s = struct.pack('4f',i,j,dat,dat)
... foo.write(s)
...
So here I just write four floating-point values to the file for each data point. Again, this is what you've already done using Fortran. Now for plotting it:
splot 'foo.dat' binary record=(10,-1) format='%float' u 1:2:3:4 w pm3d
I believe this specifies that each "scan" is a "record". Since I know that each scan will be 10 points long, that becomes the first entry in the record list. The -1 indicates that gnuplot should keep reading records until it finds the end of the file.
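As a quick sanity check (my own addition, not part of the plotting recipe), the binary file can be read back to confirm the 4-column layout before handing it to gnuplot:

>>> import numpy as np
>>> data = np.fromfile('foo.dat', dtype=np.float32).reshape(-1, 4)
>>> data.shape   # 10x10 grid points, 4 floats each
(100, 4)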
I am wondering if CUDA would be useful for this type of problem (and how to approach it in CUDA). Basically, I have been using Python to find combinations of a list, but as the data gets large I'm thinking running it on a GPU may be an interesting idea.
Say I have the list [1, 2, 3, 4, 5, 6, 7, 8] and I want only combinations of length 7; then I would get:
(1, 2, 3, 4, 5, 6, 7)
(1, 2, 3, 4, 5, 6, 8)
(1, 2, 3, 4, 5, 7, 8)
(1, 2, 3, 4, 6, 7, 8)
(1, 2, 3, 5, 6, 7, 8)
(1, 2, 4, 5, 6, 7, 8)
(1, 3, 4, 5, 6, 7, 8)
(2, 3, 4, 5, 6, 7, 8)
As the data gets larger it takes a long time. I have been using itertools.combinations, which abstracts everything away from me, so if I try to program this myself, are there any resources or reference code I can look at? Most of the algorithms related to combinations are recursive, and my CUDA card does not support recursion.
Any suggestions/tips on where to start?
I have done a small CUDA project that does bin packing by trying permutations:
http://www.dahlsys.com/software/fill_media/index.html
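One non-recursive way to approach this (my own sketch, not taken from that project) is combinatorial unranking: each length-k combination gets an integer rank, and the combination can be rebuilt from its rank with a simple loop, so each GPU thread could handle one rank independently. In plain Python (3.8+ for math.comb), to show the logic:

from math import comb

def unrank_combination(rank, n, k):
    # Return the rank-th (0-based, lexicographic) combination of k
    # elements chosen from range(n), without recursion.
    combo = []
    x = 0
    while k > 0:
        # Number of combinations that start with element x
        block = comb(n - x - 1, k - 1)
        if rank < block:
            combo.append(x)
            k -= 1
        else:
            rank -= block
        x += 1
    return combo

# The 8 combinations of length 7 from 8 items, as listed in the question
# (indices 0..7 here correspond to the values 1..8).
for r in range(comb(8, 7)):
    print(unrank_combination(r, 8, 7))

Because every rank is independent of the others, the same loop translates directly into a kernel in which each thread computes the combination for its own rank.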