#numba

2025-03-24

Backtesting trading strategies in Python with Numba. When is moving computations to the GPU actually justified?

Backtesting is a key process in algorithmic trading. It lets you validate a strategy on historical data before running it live. However, the more data there is and the more complex the strategy logic, the longer the computations take. Especially when a strategy analyzes tick data and you need to test many combinations of strategy hyperparameters, computation time can grow exponentially. In this article we show how to implement backtesting in pure Python, see how long the computations can take, and then look for ways to optimize them.

Python, as is well known, is an interpreted language: code is executed line by line at runtime rather than compiled to machine code ahead of time, as happens in C or C++. This makes development faster and more convenient, since you can immediately see the results of your code and debug easily. But the same property means Python is noticeably slower than lower-level languages. On top of that, Python uses dynamic typing, which requires extra checks and reduces performance; with very large datasets this can lead to serious growth in computation time. So how can we keep the ease and speed of development that Python offers while keeping computation time reasonable on large datasets? In this article we will see how much moving computations to the GPU can improve performance.
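To give a flavor of the compiled-loop approach the article builds toward, here is a minimal sketch (the SMA-crossover strategy, the name backtest_sma, and the parameters are invented for illustration, not taken from the article) of a backtest loop that Numba can JIT-compile on the CPU:

import numpy as np
from numba import njit

@njit(cache=True)
def backtest_sma(prices, fast, slow):
    # Long-only SMA-crossover backtest over a price series.
    # Running sums keep the loop O(n), which Numba compiles to a tight native loop.
    n = prices.shape[0]
    fast_sum = 0.0
    slow_sum = 0.0
    position = 0      # 0 = flat, 1 = long
    entry = 0.0
    pnl = 0.0
    for i in range(n):
        fast_sum += prices[i]
        slow_sum += prices[i]
        if i >= fast:
            fast_sum -= prices[i - fast]
        if i >= slow:
            slow_sum -= prices[i - slow]
            fast_ma = fast_sum / fast
            slow_ma = slow_sum / slow
            if position == 0 and fast_ma > slow_ma:
                position = 1
                entry = prices[i]
            elif position == 1 and fast_ma < slow_ma:
                position = 0
                pnl += prices[i] - entry
    return pnl

prices = np.cumsum(np.random.randn(5_000_000)) + 1_000.0
print(backtest_sma(prices, 50, 200))   # first call compiles, later calls run at native speed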

habr.com/ru/articles/893748/

#python #cuda #numba #gpu #backtesting #производительность

2025-01-29

@isaaclyman
That's pretty much how #Python optimizing compilers like #cython, #mypyc, #numba, and #TaichiLang work, and iiuc is the idea behind #MOJOlang.

As for leaky abstractions, I'd mitigate that by moving the lower-level algorithms into a separate module and limit the optimization pass to that module. Higher-level modules, like CLI entry points or API server route handlers, shouldn't need the extra optimization.
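As a sketch of that split (module and function names are hypothetical, purely to illustrate the layout), the hot numeric loop lives in its own module and only that module gets the optimization pass, e.g. via Numba:

# hot_paths.py -- the only module the optimization pass touches
from numba import njit

@njit(cache=True)
def pairwise_min_distance(xs, ys):
    # Brute-force nearest-pair distance: the kind of tight loop worth compiling.
    best = 1e308
    for i in range(xs.shape[0]):
        for j in range(i + 1, xs.shape[0]):
            dx = xs[i] - xs[j]
            dy = ys[i] - ys[j]
            d = dx * dx + dy * dy
            if d < best:
                best = d
    return best ** 0.5

# cli.py -- higher-level entry point stays plain interpreted Python
# import numpy as np
# from hot_paths import pairwise_min_distance
# print(pairwise_min_distance(np.random.rand(2000), np.random.rand(2000)))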

2025-01-28

Coming soon 🥁 #python #numba #rust

In [9]: %timeit farnocchia_coe_numba(k, p, ecc, inc, raan, argp, nu, tof)
445 ns ± 2.3 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)

In [10]: %timeit farnocchia_coe_rust(k, p, ecc, inc, raan, argp, nu, tof)
271 ns ± 1.83 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
Nithin PS (ps_nithin)
2025-01-06

Made a python library called pyrebel which implements the idea of abstraction of data. The idea is explained in github.com/ps-nithin/pyrebel/b. The program uses the numba library, and I have demo programs for image abstraction and edge detection at github.com/ps-nithin/pyrebel

Thanks,

Alexandre B A Villares 🐍 (villares@ciberlandia.pt)
2024-12-31

@nen while still on Python, have you tried scientific-computation speed-up tools like #Numba? Also, I once saw a #Cython talk and was almost convinced to give it a go (but I'm too lazy) :D

2024-11-17

🚀 Parallel Python Made Easy! 🐍

We're hosting a hands-on tutorial on PyOMP, a system bringing OpenMP parallelism to Python! By combining OpenMP directives (as strings) with Numba's JIT compiler, PyOMP taps into LLVM's OpenMP support, delivering C-like performance with Python's simplicity.

Our participants are mastering this game-changing tool to supercharge their workflows.

Stay tuned for updates!

#Python #OpenMP #PyOMP #ParallelComputing #Numba #HPC
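For a taste of what that looks like, here is a sketch modeled on the pi example from the PyOMP materials (it assumes the separately distributed PyOMP build of Numba, which provides the numba.openmp module):

from numba import njit
from numba.openmp import openmp_context as openmp

@njit
def pi(num_steps):
    # The directive is an ordinary string; PyOMP lowers it to LLVM's OpenMP runtime.
    step = 1.0 / num_steps
    total = 0.0
    with openmp("parallel for private(x) reduction(+:total)"):
        for i in range(num_steps):
            x = (i + 0.5) * step
            total += 4.0 / (1.0 + x * x)
    return step * total

print(pi(100_000_000))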

Christos Argyropoulos MD, PhD, FASN 🇺🇸 (christosargyrop.bsky.social@bsky.brid.gy)
2024-10-28

For whatever it's worth, a couple of comparisons for numerical tasks involving #python (all varieties: base language, #numba, #numpy), #rstats, and (you read it correctly) #perl chrisarg.github.io/Killing-It-w... #perl has an autothreading library, #pdl, that's 🔥 chrisarg.github.io/Killing-It-w...

The Quest for Performance Part...

2024-09-29

I am doing github.com/srush/GPU-Puzzles/

My solution for puzzle 9 "Pooling" is

def pool_test(cuda):
    def call(out, a, size) -> None:
        shared = cuda.shared.array(TPB, numba.float32)
        i = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
        local_i = cuda.threadIdx.x

        mysum = 0
        for j in range(max(0, i - 2), i + 1):
            mysum += a[j]
        out[i] = mysum

    return call

But I doubt that this is the solution, as I don't utilize the shared memory...
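For comparison, a variant that does stage the reads through shared memory might look like this (a sketch only, assuming the puzzle's setup where a single block covers the whole input, i.e. size <= TPB):

def pool_shared(cuda):
    def call(out, a, size) -> None:
        shared = cuda.shared.array(TPB, numba.float32)
        i = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
        local_i = cuda.threadIdx.x
        if i < size:
            shared[local_i] = a[i]        # one global read per thread
        cuda.syncthreads()                # make staged values visible to the block
        if i < size:
            total = shared[local_i]
            if local_i >= 1:
                total += shared[local_i - 1]
            if local_i >= 2:
                total += shared[local_i - 2]
            out[i] = total                # one global write per thread

    return call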

#programming #python #numba

2024-08-18

Nils Aall Barricelli's pioneering work on #ALife simulations in the 1950s laid the foundation for artificial life research. His experiments simulated digital symbio-organisms as simple integer numbers. Made with #python, #matplotlib, #numpy, #numba.

Nils Aall Barricelli Alife Simulation of a One-Dimensional World Composed of 4,000 Cells. The simulation shows a lot of interaction between Barricelli symbio-organisms, some selection, self-reproduction and extinction phenomena.

2024-08-18

Nils Aall Barricelli's pioneering work on #ALife simulations in the 1950s laid the foundation for artificial life research. His experiments simulated digital symbio-organisms as simple integer numbers.
Reference paper in Italian: Civiltà delle macchine, Giugno 1955, "5400 generazioni. Esperimenti di evoluzione realizzati su organismi numerici" Nils Aall Barricelli
Made with #python, #matplotlib, #numpy, #numba

Juan Nunez-Iglesias (jni@fosstodon.org)
2024-07-26

Reminder: @numba is an *excellent* accelerator for numeric computations in Python. Also, issues with Python's stability are greatly exaggerated. Case in point: I just ran my 8yo n-body benchmarks without modification:

github.com/jni/nbody-numba

and they were just 10% slower than the fastest C code — and that's including all the Python launch time and JIT warmup!

#numba #Python

timing results of running an n-body simulation written in C (taking 3.204 seconds) and one written in Python using the numba just-in-time compiler (3.636 seconds).
Christos Argyropoulos (ChristosArgyrop@mast.hpc.social)
2024-07-13

The final installment in the series:
"The-Quest-For-Performance" from my blog, discussing #python #numpy #numba, #rstats @openmp_arb #openMP enhancements of #Perl code and #simd

Bottom line: I will not be migrating to Python anytime soon.

Food for thought: The Perl interpreter (and many of the modules) are deep down massive C programs. Perhaps one can squeeze real performance kicks by looking into alternative compilers, compiler flags & pragmas?

chrisarg.github.io/Killing-It-

Itamar Turner-Trauring (itamarst@hachyderm.io)
2024-07-11

Trying to get github.com/pythonspeed/profila working on macOS.

Good news: I've gotten debugger-based sampling using lldb, which means it's usable on macOS.

Bad news: lldb is ludicrously slow. It takes 100ms to take a sample. And that's not time the program is running, that's just lldb. So if you spend 50% time in lldb, and 50% time actually running, that's 5 Hz sampling rate, not exactly the best rate for profiling.

(For comparison, I'm easily getting 50Hz from gdb).

#python #numba

Christos Argyropoulos MD, PhD (ChristosArgyrop@mstdn.science)
2024-07-07

A couple of data/compute intensive examples using Perl Data Language (#PDL), #OpenMP, #Perl, Inline and #Python (base, #numpy, #numba). Kind of interesting to see Python eat Perl's dust and PDL being equal to numpy.
@openmp_arb and Perl's multithreaded #PDL array language were the clear winners here.

@Perl

chrisarg.github.io/Killing-It-

chrisarg.github.io/Killing-It-

2024-06-03

Very nice example of optimizing code, showing at least one way to make a python program ultra fast. #python #optimization #numpy #numba #source-code #howto #data-analysis : "Analyzing Data 170,000x Faster with Python"(sidsite.com/posts/python-corrs)

Linuxiac (linuxiac)
2024-04-27

WARNING: Upgrading to Ubuntu 24.04 LTS (Noble Numbat) now risks system crashes, so hold off. Fixes coming soon.
linuxiac.com/do-not-try-to-upg

the magnificent rhys (rhys@rhys.wtf)
2024-04-21

I've just been introduced to #numba and... is this as revolutionary as it seems?

#python #programming

numba.readthedocs.io/en/stable

2024-04-21

#PyQtGraph 0.13.5 is out! While not our largest release, it has something for everyone! Note, this release is the last to support #Python 3.9 and #NumPy 1.22.

First, ImageItem got another substantial performance boost, especially if you're using #numba, but there is a significant boost for #NumPy users as well.

A ColorMapMenu was added to ColorBarItem, allowing for users to be able to change color maps interactively instead of programmatically.

#DataViz #PyQt #Qt

Madison Python (madpy@fosstodon.org)
2024-03-28

Calling all pythonistas who are interested in executing code on the GPU (but haven't had an interest in learning CUDA C), come to our April MadPy gathering to learn about what @numba can do for you!

We'll be meeting at the downtown Madison Public Library. Attendance is free and open to all. Free pizza and beverages will also be provided 🍕 🥤

Looking forward to seeing everybody!

madpy.com/meetups/2024/4/11/20

#MadisonWI #MachineLearning #UWMadison #GPU #Numba #Python #CUDA
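If you haven't met it yet, the core idea (a minimal sketch, not taken from the meetup materials; the kernel and sizes are purely illustrative) is that numba.cuda lets you write the GPU kernel itself in Python:

import numpy as np
from numba import cuda

@cuda.jit
def scale(out, a, factor):
    # One thread per element.
    i = cuda.grid(1)
    if i < a.shape[0]:
        out[i] = a[i] * factor

a = np.arange(1_000_000, dtype=np.float32)
out = np.zeros_like(a)
threads = 256
blocks = (a.shape[0] + threads - 1) // threads
scale[blocks, threads](out, a, np.float32(2.0))   # host arrays are transferred automatically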
